Will this post convince anyone? I’m optimistic.
The Vox policy podcast, The Weeds, had a long segment on bias and political belief this week, which was excellent. I didn’t disagree with anything Ezra Klein, Sarah Kliff, and Matthew Yglesias said, but I think they left out some reasons for optimism. If you can only tell one story about bias, the one they told is the right one. People are really biased, and most of us struggle to interpret new information that goes against our existing beliefs. Motivated reasoning and identity-protective cognition are the norm.
All true. But there are other veins of research that paint a slightly more optimistic picture. First, we’re not all equally biased. Second, it actually is possible to make people less biased, at least in certain circumstances. And third, just because I can’t resist, algorithms can help us be less biased, if we’d just learn to trust them.
(Quick aside: Bias can refer to a lot of things. In this post I'm thinking only about a specific type: habits of thought that prevent people from reaching empirically correct beliefs about reality.)
We’re not all equally biased. Here I’m thinking of two lines of research. The first is about geopolitical forecasting, by Philip Tetlock, Barbara Mellers, Don Moore, and others, mostly at the University of Pennsylvania and Berkeley. Tetlock is famous for his 2005 book on political forecasting, but he’s done equally interesting work since then, summarized in a new popular book Superforecasting. I’ve written about that work here and here.
Basically, lots of people, including many experts, are really bad at making predictions about the world. But some people are much better than others. Part of what separates these "superforecasters" from everyone else is knowledge and intelligence. But part of it is their style of thinking. Good forecasters are open-minded, and tend to avoid using a single mental model to think about the future. Instead, they sort of "average" together multiple mental models. This is all covered in Tetlock's 2005 book.
What Tetlock and company have shown in their more recent research is just how good these superforecasters are at taking in new information and adjusting their beliefs accordingly. They change their minds frequently and subtly, in ways that demonstrably correspond to more accurate beliefs about the state of the world. They really don't fit the standard story about bias and identity protection.
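To make that updating style concrete, here's a minimal sketch of Bayesian updating, which is one formal model of the frequent, subtle belief revision described above. This is my illustration, not Tetlock's method; the question, starting estimate, and likelihood ratios are all made up.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical forecasting question: "Will the trade agreement be signed this year?"
belief = 0.30  # starting estimate

# Each news item is summarized as a likelihood ratio:
# P(seeing this news | it happens) / P(seeing this news | it doesn't).
news = [1.5, 0.8, 2.0]  # supportive, mildly contrary, supportive

for lr in news:
    belief = update(belief, lr)
    print(f"after item with LR={lr}: belief = {belief:.2f}")  # 0.39, 0.34, 0.40
```

Notice that each item moves the estimate a few points rather than flipping it to near-certainty in either direction. That's roughly the pattern the research describes: lots of small, evidence-proportional revisions.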
Another line of research in this same vein comes from Keith Stanovich at the University of Toronto, who has studied the idea of rationality, and written extensively about how to not only define it but identify it. He also finds that people with certain characteristics — open-minded personality, knowledge of probability — are less prone to common cognitive biases.
There are ways to make people less biased. When I first started reading and writing about bias it seemed hard to find proven ways to get around it. Just telling people to be more open-minded, for instance, doesn’t work. But even then there did seem to be one path: I latched on to the research on self-affirmation, which showed that if you had people focus on an element of their identity unrelated to politics, it made them more likely to accept countervailing evidence. Having been primed to think about their self-worth in a non-political context meant that new political knowledge was less threatening.
That method is in line with the research that the Vox crew discussed — it’s sort of a jujitsu move that turns our weird irrationality against itself, de-biasing via emotional priming.
But we now know that’s not the only way. I mentioned that Stanovich has documented that knowledge of probability helps people avoid certain biases. Tetlock has found something similar, and has proven that you don’t need to put people through a full course in the subject to get the effect. As I summarized earlier this year at HBR:
Training in probability can guard against bias. Some of the forecasters were given training in “probabilistic reasoning,” which basically means they were told to look for data on how similar cases had turned out in the past before trying to predict the future. Humans are surprisingly bad at this, and tend to overestimate the chances that the future will be different than the past. The forecasters who received this training performed better than those who did not.
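What the quoted passage describes is sometimes called reference-class or base-rate thinking: anchor on how similar past cases turned out, then adjust for case-specific evidence. A minimal sketch with invented numbers (the rates and the blending weight are illustrative, not from the study):

```python
# Hypothetical question: will this infrastructure project finish on time?
base_rate = 0.25    # invented: 25% of comparable past projects finished on time
inside_view = 0.70  # gut estimate from looking only at this project's own plan

# Blend the two, leaning on the base rate; the 0.7 weight is illustrative.
w = 0.7
forecast = w * base_rate + (1 - w) * inside_view

print(f"forecast: {forecast:.2f}")  # 0.39 -- pulled well back toward history
```

The untrained move is to report the inside view alone. The training, in effect, says to start from how similar cases turned out and let the specifics of this case adjust that number, not replace it.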
There are other de-biasing techniques that can work, too. Putting people in groups can help under some circumstances, Tetlock found. And Cass Sunstein and Reid Hastie have outlined ways to help groups get past their own biases. Francesca Gino and John Beshears offer their own list of ways to address bias here.
None of this is to say it’s easy to become less biased, but it is at least sometimes possible. (Much of the work I’ve cited isn’t necessarily about politics, a particularly hard area, but recall that Tetlock’s work is on geopolitical events.)
Identifying and training rationality. So we know some people are more biased than others, and that bias can be mitigated to at least some extent through training, structured decision-making, etc. But how many organizations specifically set out to determine during the hiring process how biased someone is? How many explicitly build de-biasing into their work?
Both of these things are possible. Tetlock and his colleagues have shown that prediction tournaments work quite well at identifying who’s more and less biased. I believe Stanovich is working on ways to test for rationality. Tetlock has published an entire course on good forecasting (which is basically about being less biased) on Edge.org.
Again, I don’t really think any of this refutes what the Vox team covered. But it’s an important part of the story. Writers, political analysts, and citizens can be more or less biased, based on temperament, education, context, and training. There actually is a lot we can do to address the systematic effects of cognitive bias in political life.
If all that doesn't work, there's always algorithms. I mostly kid, at least in the context of politics, where values are central. But algorithms are already way less biased than people in a lot of circumstances (though in many cases they can totally have biases of their own) and they're only likely to improve.
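For a sense of what "algorithm" means here, it often isn't anything exotic. A classic result in this literature, Robyn Dawes's work on "improper linear models," is that even a crude formula that standardizes a few relevant predictors and adds them with equal weights can rival expert judgment. A minimal sketch, with hypothetical candidates and features:

```python
# A Dawes-style "improper linear model": standardize each predictor across
# cases, then add them with equal weights. All data here is hypothetical.
from statistics import mean, stdev

candidates = {
    "A": {"test_score": 82, "experience_yrs": 4, "structured_interview": 7},
    "B": {"test_score": 74, "experience_yrs": 9, "structured_interview": 6},
    "C": {"test_score": 90, "experience_yrs": 2, "structured_interview": 5},
}
features = ["test_score", "experience_yrs", "structured_interview"]

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(candidates)
scores = {c: 0.0 for c in names}
for f in features:
    col = [candidates[c][f] for c in names]
    for c, z in zip(names, zscores(col)):
        scores[c] += z  # equal weight on every standardized feature

for c in sorted(scores, key=scores.get, reverse=True):
    print(f"{c}: {scores[c]:+.2f}")
```

The virtue of a rule like this isn't cleverness; it's consistency. It applies the same criteria to every case, which is precisely where unaided human judgment tends to wobble.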
Of course, being humans, we also have an irrational bias against deferring to algorithms, even when we know they’re more likely to be right. But as I’ve written about, research has identified de-biasing tricks that help us overcome our bias for human judgment, too.