More on social epistemology

A few weeks back I wrote about the importance of social learning. Yes, it’s important to try to think clearly and logically where you can, but in practice we’re mostly forced to rely on cues from others to reach our beliefs.

Will Wilkinson makes this point well and in much greater depth in a recent post on conspiracy and epistemology. Along the way he highlights where he breaks from the “rationalist” community:

Now, I’ve come to think that people who really care about getting things right are a bit misguided when they focus on methods of rational cognition. I’m thinking of the so-called “rationalist” community here. If you want an unusually high-fidelity mental model of the world, the main thing isn’t probability theory or an encyclopedic knowledge of the heuristics and biases that so often make our reasoning go wrong. It’s learning who to trust. That’s really all there is to it. That’s the ballgame…

It’s really not so hard. In any field, there are a bunch of people at the top of the game who garner near-universal deference. Trusting those people is an excellent default. On any subject, you ought to trust the people who have the most training and spend the most time thinking about that subject, especially those who are especially well-regarded by the rest of these people.

I mostly agree: this is the point I was trying to make in my post on social learning.

But for the sake of argument, we should consider the rationalist’s retort. Like at least some corners of the rationalist community, I’m a fan of Tetlock’s forecasting research and think it has a lot to teach us about epistemology in practice. And Tetlock found that experts aren’t necessarily that great at reaching accurate beliefs about the future, and that a small number of “superforecasters” seem, on average, to outperform the experts.

Is Wilkinson wrong? Might the right cognitive toolkit (probability, knowledge of biases, etc.) be better than deferring to experts?

I think not, for a couple of reasons. First, sure, some people are better than experts at certain forms of reasoning, but what makes you think that’s you? I’ve done forecasting tournaments; they’re really hard. Understanding Bayesian statistics does not mean you’re a superforecaster with a track record of out-reasoning others. Unless you’ve proven it, it’s hubris to think you’re better than the experts.

I’d also argue that the superforecasters are largely doing a form of what Wilkinson is suggesting, albeit with extra stuff on top. Their key skill is arguably figuring out who and what to trust. Yes, they’re also good at probabilistic thinking and attentive to their own biases, but above all they’re extremely good information aggregators.

And that leads me to perhaps my key clarification of Wilkinson’s point. He says:

A solid STEM education isn’t going to help you and “critical thinking” classes will help less than you’d think. It’s about developing a bullshit detector — a second sense for the subtle sophistry of superficially impressive people on the make. Collecting people who are especially good at identifying trustworthiness and then investing your trust in them is our best bet for generally being right about things.

I’d put it a bit differently. If by “critical thinking” he basically means a logical reasoning class, then sure, I’m with him. What you need is not just the tools to reason for yourself: you need to learn how to figure out who to trust. So far, so good.

But I wouldn’t call this a “bullshit detector” exactly, though of course that’s nice to have. Another key lesson from the Tetlock research (and one I think is confirmed elsewhere) is that a certain sort of open-mindedness is extremely valuable: you want to be a “many-model” thinker who considers and balances multiple explanations when thinking about a topic.

That’s the key part of social learning that I’d emphasize. You want to look for people who think clearly but with nuance (it’s easy to have one without the other), who seriously consider other perspectives, and who are self-critical. Ideally, you want to defer to those people. And if you can’t find them, you want to perform some mental averaging over the perspectives of everyone else.

Best case, you find knowledgeable “foxes” and defer to them. Failing that, you add a bit of your own fox thinking on top of what you’re hearing.

Doing that well has almost nothing to do with Bayes’ theorem. Awareness of your own biases can, I think, help, though it doesn’t always. And knowledge of probability is often useful. But reaching true beliefs is, in practice, still a social activity. As Wilkinson says, it’s mostly a matter of trust.
