Storytelling, Trust and Bias

One intellectual rule of thumb on which I rely is that one disagrees with Tyler Cowen at one’s peril. Cowen is an economist at George Mason, and is one of the two bloggers at Marginal Revolution, a very popular blog on economics and culture. So while I won’t call what I’m about to write a disagreement, I do want to offer thoughts on Cowen’s TEDx talk on storytelling. (The talk is from 2009, but I just wandered across a transcript of it for the first time today.) Here’s Cowen’s typically interesting premise:

So what are the problems of relying too heavily on stories? …I think of a few major problems when we think too much in terms of narrative. First, narratives tend to be too simple. The point of a narrative is to strip it away, not just into 18 minutes, but most narratives you could present in a sentence or two. So when you strip away detail, you tend to tell stories in terms of good vs. evil, whether it’s a story about your own life or a story about politics. Now, some things actually are good vs. evil. We all know this, right? But I think, as a general rule, we’re too inclined to tell the good vs. evil story. As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your IQ by ten points or more. If you just adopt that as a kind of inner mental habit, it’s, in my view, one way to get a lot smarter pretty quickly. You don’t have to read any books. Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you’re lowering your IQ by ten points or more.

Another set of stories that are popular – if you know Oliver Stone movies or Michael Moore movies. You can’t make a movie and say, “It was all a big accident.” No, it has to be a conspiracy, people plotting together, because a story is about intention. A story is not about spontaneous order or complex human institutions which are the product of human action but not of human design. No, a story is about evil people plotting together. So you hear stories about plots, or even stories about good people plotting things together, just like when you’re watching movies. This, again, is reason to be suspicious.

It is certainly true that we rely heavily on stories. And it’s further true that in doing so we tend towards oversimplification. The human mind doesn’t do well with uncertainty; we seek narrative coherence even where it isn’t justified. And yet stories are deeply useful. They help us process information more easily. In his recent book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman relies on a kind of story to relate the way we think, through the use of the “characters” System 1 and System 2. The former is the intuitive mind, making quick decisions below the level of consciousness. The latter is what we think of as the rational mind, coming to our aid when we consciously reason through something. The dichotomy has some acceptance in the psychology literature (not as a distinction within the brain, but as a theoretical distinction for studying thinking), but some of Kahneman’s colleagues objected to his personification of these “characters.” Here’s Kahneman explaining his conceit:

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters. Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book? The answer is that the characters are useful because of some quirks of our minds, yours and mine. A sentence is understood more easily if it describes what an agent (System 2) does than if it describes what something is, what properties it has. In other words, “System 2” is a better subject for a sentence than “mental arithmetic.” The mind – especially System 1 – appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities. (p. 29)

I believe this is a better way of thinking about stories. It may be true that embedding information in stories generally leads to oversimplification or avoidance of uncertainty. On the other hand, plenty of stories can be nuanced and accurately relate complicated information. But even if storytelling does sacrifice something, it gives us the ability to digest and remember information much more quickly and easily. And the fact is that, in practice, most of us are working with very limited resources (most notably time). We need all the help we can get to process information. If stories can help, that’s generally a good thing.

Yet I generally agree with Cowen’s point about stories that seem too convenient (especially Michael Moore movies!). But I’d like to propose that, rather than setting up a mental filter to resist certain types of stories, we focus our efforts on evaluating the sources of those stories.

Here’s where trust and credibility come in. When Kahneman says he’s going to tell me a story about two characters that make up the mind, I trust that he won’t mislead me, that he won’t overstate his case, or eschew complexity so completely that I’m left with a misguided impression. I believe that he’s trying to help me get a basic grip on very complicated information as best he can given the time I’ve allotted to learn it. That’s because he comes recommended by lots of thinkers whom I respect, and because he’s extraordinarily well credentialed. I find him credible and so I trust him.

I’d urge us all to spend more effort evaluating whom we trust – whose stories we’ll buy and whose we’ll treat with Cowen-esque skepticism. And perhaps one metric for assessing credibility would in fact be to apply Cowen’s criteria (does so-and-so constantly tell black-and-white stories?). This seems a more promising path. After all, at this point if either Cowen or Kahneman told me a good vs. evil story, I’d believe him.


Being told “Be Rational” doesn’t de-bias

More bias research. I’ve been digging in pretty deeply on interventions that help mitigate motivated reasoning, and the results aren’t great. There’s self-affirmation, which I discussed in my Atlantic piece, but beyond that it’s pretty thin pickings. Motivated reasoning doesn’t track significantly with open-mindedness, and interventions urging participants to be rational seem to have little to no effect. I’d like to see more work on this because I can imagine better pleas (like explaining the sheer pervasiveness of bias, or prompting in-group loyalty to those who consider opposing arguments), but for what it’s worth, here is a bit of a paper measuring self-affirmation that also included rationality prompts:

It is of further theoretical relevance that the self-affirmation manipulation used in the present research and the identity buffering it provided exerted no effect on open-mindedness or willingness to compromise in situations heightening the importance of being rational and pragmatic. This lack of impact of self-affirmation, we argue, reflects the fact that the identity-relevant goal of demonstrating rationality (in contrast with that of demonstrating one’s ideological fidelity or of demonstrating one’s open-mindedness and flexibility) is not necessarily compromised either by accepting counterattitudinal arguments or by rejecting them. Both responses are consistent with one’s identity as a rational individual, provided that such acceptance or rejection is perceived to be warranted by the quality of those arguments. The pragmatic implication of the latter finding is worth emphasizing. It suggests that rhetorical exhortations to be rational or accusations of irrationality may succeed in heightening the individual’s commitment to act in accord with his or her identity as a rational person but fail to facilitate open-mindedness and compromise. Indeed, if one’s arguments or proposals are less than compelling, such appeals to rationality may be counterproductive. Simple pleas for open-mindedness, in the absence of addressing the identity stakes for the recipient of one’s arguments and proposals, are similarly likely to be unproductive or even counterproductive. A better strategy, our findings suggest, would be to provide the recipient with a prior opportunity for self-affirmation in a domain irrelevant to the issue under consideration and then (counterintuitively) to heighten the salience of the recipient’s partisan identity.

More discussion of this phenomenon:

Why did a focus on rationality or pragmatism alone prove a less effective debiasing strategy than the combination of identity salience and affirmation—the combination that, across all studies, proved the most effective at combating bias and closed-mindedness? Two accounts seem plausible. First, the goals of rationality and pragmatism may not fully discourage the application of prior beliefs. Because people assume their own beliefs to be more valid and objective than alternative beliefs (Armor, 1999; Lord et al., 1979; Pronin, Gilovich, & Ross, 2004; Ross & Ward, 1995), telling them to be rational may constitute a suggestion that they should continue to use their existing beliefs in evaluating the validity of new information (Lord, Lepper, & Preston, 1984). Second, making individuals’ political identity or their identity-linked convictions salient may increase the perceived significance of the political issue under debate or negotiation. Because identities are tied to long-held values (Cohen, 2003; Turner, 1991), making those identities salient or relevant to an issue may elicit moral concern, at least when people’s self-integrity no longer depends on prevailing over the other party.

Willpower and belief

I’ve blogged a bunch now about Roy Baumeister’s work on self-control, including the idea that willpower is finite in the short term and is depleted throughout the day as you use it. So I feel compelled to post this NYT op-ed claiming something quite different. I don’t know who’s right, but here’s the gist:

In research that we conducted with the psychologist Veronika Job, we confirmed that willpower can indeed be quite limited — but only if you believe it is. When people believe that willpower is fixed and limited, their willpower is easily depleted. But when people believe that willpower is self-renewing — that when you work hard, you’re energized to work more; that when you’ve resisted one temptation, you can better resist the next one — then people successfully exert more willpower. It turns out that willpower is in your head…

…You may contend that these results show only that some people just happen to have more willpower — and know that they do. But on the contrary, we found that anyone can be prompted to think that willpower is not so limited. When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.

I’ll keep my eyes open for a response from Baumeister or his colleagues; let me know if you see one. Meanwhile, this reminded me of a similar phenomenon with respect to IQ:

Yet social psychologists Aronson, Fried, and Good (2001) have developed a possible antidote to stereotype threat. They taught African American and European American college students to think of intelligence as changeable, rather than fixed – a lesson that many psychological studies suggest is true. Students in a control group did not receive this message. Those students who learned about IQ’s malleability improved their grades more than did students who did not receive this message, and also saw academics as more important than did students in the control group. Even more exciting was the finding that Black students benefited more from learning about the malleable nature of intelligence than did White students, showing that this intervention may successfully counteract stereotype threat.

Both of these lines of research suggest that belief matters. Fascinating stuff.

Don’t blog on an empty stomach

http://static.bloggingheads.tv/ramon/_live/players/player_v5.2-licensed.swf

(The clip above covers some basics of mental energy and depletion.)

The alternative title for this post was “I’m hungry; you’re wrong.” I’m not sure which is better… In any case, consider this bit from Kahneman:

Resisting this large collection of potential availability biases is possible, but tiresome. You must make the effort to reconsider your intuitions… Maintaining one’s vigilance against biases is a chore — but the chance to avoid a costly mistake is sometimes worth the effort.

Now as I understand it, this is basically a function of self-control. By taxing your brain to counteract biases, you’re drawing on a finite pool of mental energy. We know from studies of willpower that doing so can cause problems. As John Tierney reported in an excellent NYT Magazine piece on decision fatigue:

Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy.

He also relates a fascinating study of Israeli parole hearings:

There was a pattern to the parole board’s decisions, but it wasn’t related to the men’s ethnic backgrounds, crimes or sentences. It was all about timing, as researchers discovered by analyzing more than 1,100 decisions over the course of a year. Judges, who would hear the prisoners’ appeals and then get advice from the other members of the board, approved parole in about a third of the cases, but the probability of being paroled fluctuated wildly throughout the day. Prisoners who appeared early in the morning received parole about 70 percent of the time, while those who appeared late in the day were paroled less than 10 percent of the time.

It gets more interesting:

As the body uses up glucose, it looks for a quick way to replenish the fuel, leading to a craving for sugar… The benefits of glucose were unmistakable in the study of the Israeli parole board. In midmorning, usually a little before 10:30, the parole board would take a break, and the judges would be served a sandwich and a piece of fruit. The prisoners who appeared just before the break had only about a 20 percent chance of getting parole, but the ones appearing right after had around a 65 percent chance. The odds dropped again as the morning wore on, and prisoners really didn’t want to appear just before lunch: the chance of getting parole at that time was only 10 percent. After lunch it soared up to 60 percent, but only briefly.

So, returning to the Kahneman bit, I wonder if we might observe a similar phenomenon with respect to political bloggers. Would ad hominem attacks follow the same pattern throughout the day? Might bloggers who had just eaten have the mental energy to counter their biases, to treat opponents with respect, etc.? And might that ability be depleted as the time between meals wears on and their mental energy is lowered? This could be tested pretty easily by analyzing the frequency of certain ad hominem clues like, say, the use of the word “idiot”, and then checking frequency against time of day. I’d love to see this data, and not just because I want an excuse to snack while I write.
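If anyone wants to try, here’s a rough sketch of what that analysis could look like in Python. Everything in it is hypothetical: the sample posts, the list of clue words, and the assumption that you’ve already scraped each post along with its publication timestamp.

```python
from collections import Counter
from datetime import datetime

# Hypothetical corpus: (publication timestamp, post text) pairs,
# e.g. scraped from a political blog's archives.
posts = [
    (datetime(2012, 1, 5, 9, 15), "A thoughtful response to my critics..."),
    (datetime(2012, 1, 5, 17, 40), "Only an idiot could believe this plan..."),
]

AD_HOMINEM_CLUES = ("idiot", "moron", "hack")  # crude proxy for incivility

def clue_count(text):
    """Count ad hominem clue words in a post (case-insensitive)."""
    lowered = text.lower()
    return sum(lowered.count(clue) for clue in AD_HOMINEM_CLUES)

# Tally clue words by hour of day to see whether incivility climbs
# as the time since the last meal (presumably) grows.
clues_by_hour = Counter()
posts_by_hour = Counter()
for timestamp, text in posts:
    clues_by_hour[timestamp.hour] += clue_count(text)
    posts_by_hour[timestamp.hour] += 1

for hour in sorted(posts_by_hour):
    rate = clues_by_hour[hour] / posts_by_hour[hour]
    print(f"{hour:02d}:00  {rate:.2f} clue words per post")
```

A real version would obviously need a much better incivility measure than three swear-adjacent words, but even this crude count, plotted against the clock, would be a start.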

Algorithms and the future of divorce

In Chapter 21 of Thinking, Fast and Slow Dan Kahneman discusses the frequent superiority of algorithms over intuition. He documents a wide range of studies showing that algorithms tend to beat expert intuition in areas such as medicine, business, career satisfaction and more. In general, the value of algorithms tends to be in “low-validity environments” which are characterized by “a significant degree of uncertainty and unpredictability.”*

Further, says Kahneman, the algorithms in question need not be complex:

…it is possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula:

frequency of lovemaking minus frequency of quarrels

You don’t want your result to be a negative number.
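Just to show how little machinery an equally weighted formula needs, here’s Dawes’s predictor as a few lines of Python. The monthly numbers are made up by me for illustration:

```python
def marital_stability_score(lovemaking_per_month, quarrels_per_month):
    """Dawes's equally weighted formula: frequency of lovemaking
    minus frequency of quarrels."""
    return lovemaking_per_month - quarrels_per_month

# Hypothetical couple: 10 instances of lovemaking, 12 quarrels a month.
score = marital_stability_score(10, 12)
print(score)  # -2: negative, which is the outcome you don't want
```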

Kahneman concludes the chapter with an example of how this might be used practically: hiring someone at work.

A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”

All of this makes me think of online dating. This is an area where we are transitioning from almost entirely intuition to a mixture of algorithms and intuition. Though algorithms aren’t making any final decisions, they are increasingly playing a major role in shaping people’s dating activity. If Kahneman is right, and if finding a significant other is a “low-validity environment,” will our increased use of algorithms lead to better outcomes? What truly excites me about this is that we should be able to measure it. Of course, doing so will require very careful attention to the various confounding variables, but I can’t help but wonder: will couples that meet online have a lower divorce rate in 20 years than couples that didn’t? Will individuals who spent significant time dating online be less likely to have been divorced than those who never tried it?
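If someone eventually collects that data, the naive first cut would be a simple comparison of two divorce rates. Here’s a minimal sketch with entirely invented numbers; it deliberately ignores all the confounders I just mentioned, which a real analysis would have to adjust for:

```python
from math import sqrt

def two_proportion_z(divorced_a, total_a, divorced_b, total_b):
    """Z-statistic for comparing two divorce rates (no confounder control)."""
    p_a, p_b = divorced_a / total_a, divorced_b / total_b
    pooled = (divorced_a + divorced_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Entirely made-up numbers: 20-year divorce counts for couples who met
# online vs. offline. Age, income, education, and everything else that
# differs between the groups would all need controlling in practice.
z = two_proportion_z(180, 1000, 230, 1000)
print(f"z = {z:.2f}")  # a large negative z would favor the online group
```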

*One might reasonably object that this definition stacks the deck against intuition, and I think this aspect of the debate deserved a mention in the chapter. A focus on “low-validity environments” is, by definition, a focus on areas where intuition is lousy. So how shocking is it that these are cases where other methods do better? And yet the conclusions here are extremely valuable. Even though we know that these “low-validity” scenarios are tough to predict, we still generally tend to overrate our ability to predict via intuition and underrate the value of simple algorithms. So in the end this caveat – while worth making – doesn’t really take away from Kahneman’s point.

Fight bias with math

I just finished the chapter in Kahneman’s book on reasoning that dealt with “taming intuitive predictions.” Basically, we make predictions that are too extreme, ignoring regression to the mean, assuming the evidence to be stronger than it is, and ignoring other variables through a phenomenon called “intensity matching.” 

Here’s an example (not from the book; made up by me):

Jane is a ferociously hard-working student who always completes her work well ahead of time.

What GPA do you think she graduates college with? Settle on an actual number in your mind.

So Kahneman explains “intensity matching” as being able to toggle back and forth intuitively between variables. If it sounds like Jane is in the top 10% in motivation/work ethic, she must be in the top 10% in GPA. And our mind is pretty good at adjusting between those two. I’m going to pick 3.7 as the intuitive GPA number; if yours is different you can substitute it in below.

Kahneman says this is biased because you’re ignoring regression to the mean, which is another way of saying that GPA and work ethic aren’t perfectly correlated. So here’s how to use Kahneman’s trick for taming your prediction.

GPA = work ethic + other factors

What is the correlation between work ethic and GPA? Let’s guess 0.3. (It can be whatever you think is most accurate.)

Now, what is the average GPA of college students? Let’s say 2.5. (Again, the exact number doesn’t matter.)

Here’s Kahneman’s formula for taming your intuitive predictions:

0.3 × (3.7 − 2.5) + 2.5 = 2.86, your statistically reasonable prediction

So apply the correlation between GPA and work ethic to the difference between your intuitive prediction and the mean, and then go from the mean in the direction of your intuition by that amount.
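In code, the whole trick is one line of arithmetic. Here’s a minimal sketch in Python using the guesses from above, so you can swap in your own numbers:

```python
def tame_prediction(intuitive_estimate, mean, correlation):
    """Kahneman's recipe: start at the mean, then move toward the
    intuitive estimate by only the correlation-scaled amount."""
    return mean + correlation * (intuitive_estimate - mean)

# Using the example's guesses: intuitive GPA 3.7, average GPA 2.5,
# work ethic/GPA correlation 0.3.
print(round(tame_prediction(3.7, 2.5, 0.3), 2))  # 2.86
```

Note that a correlation of 1 would hand back your intuition untouched, and a correlation of 0 would hand back the mean; everything in between is a compromise between the two.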

I played around with some different examples here because my intuition was grappling with some issues around luck vs. static variables, but those aside, this is a neat way to counter one’s bias in the face of limited information.

I can’t help but wonder, though, if the knowledge that this exercise was designed to counter bias led anyone to avoid or at least temper intensity matching. In other words, what were your intuitions for the GPA she’d have after just reading the description of her hard work? Did the knowledge that you were biased lead you to a lower score than the one I mentioned?

Here’s what I’m getting at: if it’s possible (and this is just me riffing right now) to dial down your biases, consciously or not, when the issue of bias is on your mind, then your intuitions could already be dialed down going into this exercise, at the point of the original GPA intuition, which could ruin the outcome. Put another way, the math above relies on accurate intensity matching, which is itself a bias! If someone came into this with that bias already dialed down, they might actually end up with a worse prediction if they also applied Kahneman’s suggested correction.

Poverty, culture, economics

If you’re at all interested in the science of willpower, self-control, or decision-making (and I am) you really should read John Tierney’s excellent NYT Magazine piece on the subject. Here’s one nugget:

Spears and other researchers argue that this sort of decision fatigue is a major — and hitherto ignored — factor in trapping people in poverty. Because their financial situation forces them to make so many trade-offs, they have less willpower to devote to school, work and other activities that might get them into the middle class. It’s hard to know exactly how important this factor is, but there’s no doubt that willpower is a special problem for poor people. Study after study has shown that low self-control correlates with low income as well as with a host of other problems, including poor achievement in school, divorce, crime, alcoholism and poor health. Lapses in self-control have led to the notion of the “undeserving poor” — epitomized by the image of the welfare mom using food stamps to buy junk food — but Spears urges sympathy for someone who makes decisions all day on a tight budget. In one study, he found that when the poor and the rich go shopping, the poor are much more likely to eat during the shopping trip. This might seem like confirmation of their weak character — after all, they could presumably save money and improve their nutrition by eating meals at home instead of buying ready-to-eat snacks like Cinnabons, which contribute to the higher rate of obesity among the poor. But if a trip to the supermarket induces more decision fatigue in the poor than in the rich — because each purchase requires more mental trade-offs — by the time they reach the cash register, they’ll have less willpower left to resist the Mars bars and Skittles. Not for nothing are these items called impulse purchases.

When we talk about poverty, we inevitably talk about various “cultural” issues, by which we mostly mean “non-economic” issues. Economic improvement can’t pull people out of poverty until we solve various cultural issues that are holding people back, or so the story goes. But we should really look at these as all part of the same cycle. Being poor puts you at a distinct and empirically demonstrable disadvantage when it comes to exerting self-control. Lack of self-control tends to play a large role in life outcomes. Much of what we think of as the “culture” of poverty may in fact be very much an economic issue.