Nobel Prize-winning psychologist Daniel Kahneman: AI will surpass humans in emotional intelligence

Some economists with whom I work hosted a conference on the economics of AI in Toronto last week, and while I couldn’t attend, some of the videos are online. And Brad DeLong has provided a “rough transcript” of psychologist Daniel Kahneman’s remarks, which include this bit about emotional intelligence:

[????] said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally. A face that changes expressions, especially if it’s sort of baby-shaped, is a cue that will make people feel very emotional. Robots will have these cues. Furthermore, it is already the case that AI reads faces better than people do, and can and undoubtedly will be able to predict emotions and their development far better than people can. I really can imagine that one of the major uses of robots will be taking care of the old. I can imagine that many old people will prefer to be taken care of by friendly robots that have a name and a personality that is always pleasant. They will prefer that to being taken care of by their children.

Now I want to end on a story. A well-known novelist (I’m not sure he would appreciate my giving his name) wrote me some time ago that he was planning a novel. The novel is about a love triangle between two humans and a robot. What he wanted to know is how the robot would be different from the individuals. I proposed three main differences:

  1. One is obvious: the robot will be much better at statistical reasoning and less enamored with stories and narratives than people.

  2. The robot would have much higher emotional intelligence.

  3. The robot would be wiser.

More from Kahneman at the link. This seems like an underrated possibility to me. Too often, commentary on AI assumes that as machines take over analytical tasks, humans will focus on emotional ones. All this reminded me of a piece I wrote for HBR a couple of years back that included this bit:

In his own research, Gratch has explored how thinking machines might get the best of both worlds, eliciting humans’ trust while avoiding some of the pitfalls of anthropomorphism. In one study he had participants in two groups discuss their health with a digitally animated figure on a television screen (dubbed a “virtual human”). One group was told that people were controlling the avatar; the other group was told that the avatar was fully automated. Those in the latter group were willing to disclose more about their health and even displayed more sadness. “When they’re being talked to by a person, they fear being negatively judged,” Gratch says.

This is something we’d traditionally think of as interpersonal or emotional work, and yet people preferred machines because they didn’t come equipped with certain types of social and emotional reactions. That suggests to me that the bar is even lower than we think. Machines will get better at EQ, but they’ll also be able to respond differently than most people do, in ways that give them an edge. A machine that’s almost as good at listening and perceiving might still be preferable to a human in some cases if the machine also knows, for example, not to be judgmental. A machine that is nearly as good as a human at having a conversation might be preferable in some situations if it’s a more generous conversationalist and asks you more about yourself. AI may one day pass an emotional Turing test. But I expect we’ll start using machines in social contexts well before then because of the other advantages they bring.
