Psychology To Win A Political Argument?
Most of us are guilty of chalking up the positions of the opposing political party to sheer stupidity. Yet, as confounding as it is, there is a method to their madness. Social psychologists have pinpointed why some people don’t see the world as we do, and from that they developed a persuasive tool called moral reframing that allows you to effectively communicate your position to people with opposing political views. On this episode, we will be joined by Jan Voelkel, a PhD candidate in the Department of Sociology at Stanford who has authored multiple papers on political ideology, to discuss moral reframing.
Episode Transcript
Anita Kirti (00:00):
So today we're going to talk about the moral psychology of political ideology: the method to the madness of both conservatives and liberals, and how, by understanding it, you can actually bridge the gap with someone you don't necessarily agree with, or in today's world, someone you have blocked on Facebook. And don't worry, you don't have to bear with my voice for this entire podcast. We will be joined by Jan Voelkel, a PhD candidate who studies political persuasion in the Department of Sociology at Stanford. He was an author on most of the papers this podcast draws from, so his insight is super important. I learned a lot, and I hope you do too. If you want to learn about that, just keep listening. This is The World We Inherit, and I'm your host Anita Kirti. Thanks for listening.
Anita Kirti (00:54):
So the past four years in America have been extremely divisive, but one thing that actually is common amongst all of us is this inability to comprehend or understand what the other side is saying. I know I have been one of those people, and I think that's a huge misstep as far as liberals go, because we're writing off an entire group of people that is not entirely made up of racists and bigots. And writing them off isn't particularly beneficial to our cause: if you find things like climate change and systemic racism upsetting, you have to acknowledge that those are problems that can't be addressed unless you can convince a large number of people to work for that cause. And I completely, completely understand, and I'm going to preface this entire podcast with this: it is asking a lot of people to have to argue for their own humanity. I understand that. I feel that all the time.
Anita Kirti (01:56):
I go back and forth between the two of trying to be understanding, but at the same time, why the hell am I explaining to you why I belong in this country? I get that. But I think today's exercise for the podcast should be to try to be open-minded and understand other people. So I'm going to stop rambling. Let's get into the science.
Anita Kirti (02:28):
If you've ever argued with someone about political issues, there comes a point where you explain your reasoning, and they explain theirs, but you each end up at a different conclusion. The reason for that is that everyone bases their arguments on different values. A good example would be abortion: usually liberals hold the value of equity and women's rights, while conservatives hold the value of sanctity of life. So even when you agree on the same set of facts, you will not necessarily reach the same conclusion. In moral psychology, there is a concept called moral foundations theory. It sets out five different moral values that are used to justify certain arguments. These five moral values are actually universal across cultures; they arose as ways to build collaboration amongst social groups thousands of years ago. The theory goes on to say that the moral conflicts people have arise because the values we personally endorse clash with those of an opponent who gives more importance to a different value.
Anita Kirti (03:38):
Let's first go over these five moral foundations so I can give you a better understanding. Okay, number one is the care foundation, and it has to do with preventing the suffering of others. So a common trait that's associated with the care foundation is something like justice. And two, the fairness foundation. This also relates to justice, equality, and anti-discrimination; it was an evolved trait that helped us cooperate thousands of years ago. And that brings us to number three, loyalty. This emphasizes how important someone's in-group is. The in-group is the group that you're associated with, and those who are not in that group are called the out-group. It's very self-explanatory: loyalty is prioritizing your in-group. So ideas like trust and patriotism and sacrifice are all things that stem from this foundation, and it would be considered against this foundation to criticize the group or engage in dissent.
Anita Kirti (04:36):
Okay. So the next one, number four, authority. This is a foundation that is associated with respect for people in higher social rankings, and it also has to do with practicing tradition. And the last one is sanctity. This has to do with the idea of purity, and overall just avoiding things that people would find disgusting. An example of this would be something like stepping on a flag; somebody who values sanctity would be offended by that. Okay, so those were the five. You don't have to memorize them. I'm going to say them a hundred times later in this podcast, so don't worry about that, but let's group them so that they're easier to keep track of. The care and fairness foundations are considered individualizing foundations, so they're based on each individual: fairness for each individual, caring for another individual. The other three, loyalty, authority, and sanctity, are considered binding foundations, meaning that they have more to do with the group.
Anita Kirti (05:36):
And I'm sure you guessed from that explanation where we're going: liberals are more associated with the individualizing foundations. They tend to make their arguments on the care and fairness foundations, whereas conservatives make their arguments mostly on the binding foundations of loyalty, authority, and sanctity. In moral psychology, this is called the moral divide hypothesis. There's a lot of research on the separation between the two, but the overall idea is that conservatives see existing institutions as things that have protected people for a long time, so they should be preserved, while liberals are more about getting societal gains through constantly improving the institutions around them. So I think of the values that each group holds as the lens through which they see the world. If I'm looking at the immigration issue from the perspective of the care value, I would be more open to accepting refugees.
Anita Kirti (06:36):
And when I see that someone from the other side opposes that, I would see them as uncaring, but from their perspective, they are maintaining the status quo. So the takeaway from the moral divide hypothesis is that the differences between liberals and conservatives anywhere in the world, whether in Brazil or India or South Korea, come from the moral lens through which they look at issues. All of that being said, I am sure you are wondering which foundations Trump supporters value the most. And I'm mentioning this because this case is unique in that he was able to swing both moderates and conservatives, and he is in no way a traditional conservative, so we can't really put him in that box. So let me answer that question. A public opinion study by the Cato Institute, a right-leaning think tank, done in November of 2015, scored which foundations are valued by the supporters of certain political figures.
Anita Kirti (07:39):
They found that voters who support Trump score high on authority, loyalty, and sanctity, so our three binding foundations, which explains why many Republicans voted for him. But they also score low on care, which is different from supporters of people like Mike Huckabee. So this combination of high authority, loyalty, and sanctity with a low score on care points to an authoritarian inclination, in that they value obedience and tend to score low on compassion. And this accounted for demographics. Politico also found the same thing: that an authoritarian inclination was the single best predictor of Trump support. And this too was controlled for race, education, gender, and age. That was a lot. Let's take a break here, go grab a snack or something. And when we get back, I'm going to discuss how we can use the moral divide hypothesis as a tool to change someone's mind.
Anita Kirti (08:49):
Building on what we learned in the first part about the five moral foundations and the moral divide hypothesis, we are going to talk about a tool for political persuasion called moral reframing. It proposes that you take your policy position and frame it in a way that's consistent with another person's moral values, showing them how their own foundations can actually bring them to the same conclusion that you ended up at. And I'm going to stop talking about this here. I'm going to let someone who is much more qualified on this topic, and who wrote many a paper on it, walk you through how this works. So this is an interview I did with Jan Voelkel from the Department of Sociology at Stanford.
Anita Kirti (09:35):
I'm here with Jan Voelkel, who's a PhD student at the Department of Sociology at Stanford University. Thank you for joining us.
Jan Voelkel (09:44):
Yeah. Thanks for having me.
Anita Kirti (09:46):
So let's just get right into it. At this point in the podcast, we've already gone over the moral foundations theory. And so we're going to just directly start at what is moral reframing?
Jan Voelkel (09:55):
So moral reframing starts from the observation that arguments in the political domain are typically framed in a particular way. And by framed in a particular way, I mean that certain political positions are often associated with certain moral values. So for instance, the fight for the right to marry for gays and lesbians is often argued as a case for fairness and equality. And moral reframing is basically the idea that you can arrive at the same political position by starting from very different moral values and moral angles. The consequence of that idea is basically that it is possible to persuade people with a different set of moral beliefs that a certain political position is consistent with their moral values as well. For instance, to stay with the example of same-sex marriage, you can also argue that there are a lot of gays and lesbians serving in the US military, so it is patriotic and American to support that gays and lesbians have the right to marry.
Anita Kirti (11:08):
Or if you're a libertarian, it would be less government, small government shouldn't be deciding who marries. Would that make sense?
Jan Voelkel (11:19):
Yeah, we haven't examined or tested that, but it would make total sense from a theoretical point of view.
Anita Kirti (11:21):
I'm going to step in here for a second and give you an idea of how effective moral reframing actually is. This is a study that Jan Voelkel himself had done, titled Morally Reframed Arguments Can Affect Support For Political Candidates. They found that conservatives who read a message opposing Trump through the lens of the loyalty foundation actually supported him less than conservatives who read a message that also opposed Trump but was based on liberal concerns of fairness. And they found the same thing when they did this with liberals and Hillary Clinton. Okay. Let's get back to the interview.
Anita Kirti (11:58):
Considering how hyperpolarized we're becoming, how effective will moral reframing be as a tool when people increasingly see each other as either part of us or not? How is that going to work going forward?
Jan Voelkel (12:12):
I mean, then there is basically no way to use moral reframing; if you talk to people from the other side, you also need to be motivated not to frustrate them. I also think it's a fair argument to make that your take on this should be respected too, and that the moral values that you care about are the right moral values to care about. But there is also another take on this, which is basically: do I want other people to support the kind of policies that I support? And in order to show the other side that I am a respectful conversation partner and have taken their perspective on this too, I may try to present arguments that are more in line with their perspective.
Anita Kirti (13:00):
We're seeing a lot of disinformation from Iran and China and all these other state actors that are trying to meddle in the election. How does moral reframing work with false information?
Jan Voelkel (13:11):
In our latest paper, we have a paragraph at the very end where we try to be very clear in saying that moral reframing itself very much depends on how you're using it and whether you are transparent about what you are doing. And I'm sure that there are a lot of examples of how moral reframing has been used in the past by governments who engage in immoral actions.
Anita Kirti (13:34):
Yeah. I was wondering about that. And I was like, I don't know if those two things work together. Are there any values that people have that make them more inclined to accept or be susceptible to misinformation?
Jan Voelkel (13:44):
That's an interesting question. We haven't tested whether people with certain moral values are more or less susceptible to misinformation. I do think it would be reasonable to think that we would all be more susceptible to misinformation that is consistent with our moral values. Basically, you've got two different worlds.
Anita Kirti (14:06):
Okay. So last question. Do you have any tips in general for how to speak to people who you don't necessarily agree with?
Jan Voelkel (14:13):
I would think very carefully about what my goals in the conversation are: do I want to make a case for how I think about this issue, or do I want to present a new perspective to my conversation partner and try to persuade them of a certain position? If the latter is the case, I would try to be transparent and say, these are not my own moral values, but I think this is an argument that may make a lot of sense from your perspective. That way, you try to make a good case. A colleague of mine here at Stanford, Louisa [inaudible 00:14:53], suggests that while we were often wrestling with whether it's good to take the other side's perspective or not, perceiving empathy or perspective taking as useful actually helps us make better, more persuasive arguments to the other side. And you could apply perspective taking specifically as moral perspective taking. So that might be a useful tip as well.
Anita Kirti (15:26):
Actually, those are really helpful. Now at this point, I think you have to go out of your way to find people who really are diametrically opposed to you on the political scale, but it's still a worthy effort.
Jan Voelkel (15:36):
Okay, good. People like you who do their own podcasts have the potential to reach people whom we don't even know, and whom we would never have expected to one day hear us talking.
Anita Kirti (15:52):
Yeah, that's true. So one step at a time. Thank you for being on the podcast. It was really nice having you, learned a lot.
Jan Voelkel (15:55):
Yeah, thank you so much for having me, and I hope you continue with your podcast.
Anita Kirti (15:58):
Bye.
Jan Voelkel (15:58):
Bye.
Anita Kirti (16:04):
So that is moral reframing. And I wanted to just go over the takeaways of the day. I feel like a professor. We went over moral foundations theory, with its five foundations. Then we went over the moral divide hypothesis, which explained that liberals make arguments on the care and fairness foundations while conservatives make arguments mostly on authority, sanctity, and loyalty. And most importantly, we talked about moral reframing and how to use it. So, yeah, that's it. I hope this is helpful, and that you use it the next time you are talking to someone about contentious political issues.
Anita Kirti (16:44):
I think I'm going to have to make a part two of this episode, because there is so much more from the interview that I wanted to add in here but we didn't get a chance to. And I'm going to practice this tool by having a discussion with a Trump supporter to see how it works in real life. In theory, this seems pretty cool, but let's see if I can actually do it. So that's going to be next episode. Go ahead and subscribe so that you can listen to part two, and thank you for listening. I'm your host Anita Kirti, and this is The World We Inherit.