Triangulating the truth, and how journalistic objectivity should work

In a recent study, researchers at Stanford asked 45 people to evaluate the credibility of information on different websites. Their goal was to examine the participants’ ability to discern truth from falsehood. Ten of the participants were PhD historians, ten were professional fact-checkers, and the rest were college students.

The fact-checkers did far better on the tasks the researchers assigned. Most of the historians were indistinguishable from the college students. Of course, a study this small can’t say much with any confidence. But its conclusions are intriguing. The fact-checkers did better because they “read laterally.” As the researchers write:

They employed a powerful heuristic for taking bearings: lateral reading. Fact checkers almost immediately opened up a series of new tabs on the horizontal axis of their browsers before fully reading the article… In an Internet teeming with cloaked sites and astroturfers (front groups pretending to be grassroots efforts), taking bearings often assumes the form of lateral reading. When reading laterally, one leaves a website and opens new tabs along a horizontal axis in order to use the resources of the Internet to learn more about a site and its claims. Lateral reading contrasts with vertical reading. Reading vertically, our eyes go up and down a screen to evaluate the features of a site. Does it look professional, free of typos and banner ads? Does it quote well-known sources? Are bias or faulty logic detectable? In contrast, lateral readers paid little attention to such features, leaping off a site after a few seconds and opening new tabs. They investigated a site by leaving it.

This idea of lateral reading reminded me of a somewhat pithier description of how fact-checkers do their work, from the book Deciding What’s True:

PolitiFact items often feature analysis from experts or groups with opposing ideologies, a strategy described internally as “triangulating the truth.” “Seek multiple sources,” an editor told new fact-checkers during a training session. “If you can’t get an independent source on something, go to a conservative and go to a liberal and see where they overlap.” Such “triangulation” is not a matter of artificial balance, the editor argued: the point is to make a decisive ruling by forcing these experts to “focus on the facts.” As noted earlier, fact-checkers cannot claim expertise in the complex areas of public policy their work touches on. But they are confident in their ability to choose the right experts and to distill useful information from political arguments.

The “lateral readers” from the study were doing something similar, but through internet research instead of interviews. In both cases, the key to evaluating information is to compare multiple sources. And that process involves both comparing different sources’ treatment of the claim and using multiple sources to assess the credibility and trustworthiness of the original source.

The key part of “triangulating the truth,” though, is the word “truth.” Fact-checkers don’t check multiple sources simply to be “balanced” or “neutral.”* That’s in line with Yochai Benkler’s recommendation to journalists, “to shift away from… objectivity as neutrality to objectivity as truth-seeking.”

But these examples illustrate how journalistic tools adopted, at least in part, to serve an ideal of objectivity-as-neutrality (like talking to multiple sources from different ideological persuasions) can be repurposed to serve the goal of objectivity-as-truth-seeking.

At this point, someone might reasonably ask: does “triangulating the truth” work? Does the process of speaking to multiple sources improve one’s chances of reaching the correct answer on empirical matters? The study that I started with hints at that possibility, but as I said it’s too small a sample to put much confidence in. However, I’d argue that “triangulating the truth” and “reading laterally” both accord with another bit of research-backed advice for truth-seeking.

Research on geopolitical forecasting by Philip Tetlock and others has found that the best forecasters consider many perspectives, are actively open-minded, and use a “many-model” approach to thinking about the world. Here’s how I described some of that research:

He found that overall, his study subjects weren’t very good forecasters, but a subset did perform better than random chance. Those people stood out not for their credentials or ideology but for their style of thinking. They rejected the idea that any single force determines an outcome. They used multiple information sources and analytical tools and combined competing explanations for a given phenomenon. Above all, they were allergic to certainty.

(More detail on that work can be found here.)

The methods of Tetlock’s superforecasters aren’t exactly those of the fact-checkers, but they’re similar. Certainly, the superforecasters appear to be lateral readers: they check multiple sources and “triangulate” them, and the evidence shows quite clearly that some people can do much better than others at this skill of triangulation.

For me, this suggests that the fact-checkers’ process has merit, and it points toward a future for journalistic objectivity. If journalists are going to take Benkler’s advice, as I believe they should, they need to reevaluate their processes with an eye toward empirical truth-seeking. We’ve seen that many standard journalistic processes are well-suited to this endeavor, but they can also likely be improved. What might this look like?

One place journalists might look for inspiration is the literature on diversity and decision-making, as well as the (related) literature on why aggregating multiple models often outperforms any one model on its own. In both cases, the key insight isn’t simply that adding more perspectives is always useful. It’s that additional perspectives are most useful when they add something the existing perspectives left out. Here’s Scott Page, a professor at Michigan who’s done lots of work in both areas:

With an ensemble of models, you can make up for the gaps in any one of the models. Constructing the best ensemble of models requires thought and effort. As it turns out, the most accurate ensembles of models do not consist of the highest performing individual models. You should not, therefore, run a horse race among candidate models and choose the four top finishers. Instead, you want to combine diverse models.
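Page makes that claim precise with what he calls the diversity prediction theorem; the notation below is my own. If $n$ models produce estimates $s_1, \dots, s_n$ of a true value $\theta$, and $\bar{s} = \frac{1}{n}\sum_i s_i$ is their average, then as a matter of algebraic identity:

$$
\underbrace{(\bar{s} - \theta)^2}_{\text{crowd error}}
\;=\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}(s_i - \theta)^2}_{\text{average individual error}}
\;-\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}(s_i - \bar{s})^2}_{\text{prediction diversity}}
$$

The ensemble beats the average of its members by exactly as much as the members disagree with one another. An ensemble of near-clones, however individually accurate, gains almost nothing from aggregation; a diverse ensemble can.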

Viewed through the many-model lens, it makes sense to talk to both a conservative and a liberal when trying to discern the truth about a political claim. But the approaches diverge from there. A traditional “balanced” or “neutral” approach to political journalism would say that if you’re going to talk to four people, talk to two liberals and two conservatives. The literatures on diversity and model aggregation suggest instead that your third and fourth interviews should be about finding different perspectives and additional information that the conservative and the liberal both lack.
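To see the difference those last two interviews can make, here is a minimal simulation; it’s my own toy illustration under assumed numbers, not anything from Page’s work. Each source estimates the same underlying quantity, sources from the same ideological camp share a correlated error on any given question, and every source is equally accurate on its own:

```python
import numpy as np

# Toy model (assumed numbers): every source estimates the same true value.
# Sources in the same ideological camp share a correlated error on each
# question; every source also has its own idiosyncratic noise.
rng = np.random.default_rng(0)
TRIALS = 100_000   # number of simulated questions
TRUTH = 0.0
CAMP_SD = 1.0      # spread of a camp's shared bias
NOISE_SD = 1.0     # each source's individual noise
# "Independent" sources get no camp bias but the same total error variance,
# so every source is equally accurate on its own.
INDEP_SD = float(np.hypot(CAMP_SD, NOISE_SD))

def panel_rmse(n_lib: int, n_con: int, n_indep: int) -> float:
    """RMSE of a panel's averaged estimate across the simulated questions."""
    lib_bias = rng.normal(0, CAMP_SD, TRIALS)  # shared by all liberal sources
    con_bias = rng.normal(0, CAMP_SD, TRIALS)  # shared by all conservatives
    estimates = (
        [TRUTH + lib_bias + rng.normal(0, NOISE_SD, TRIALS) for _ in range(n_lib)]
        + [TRUTH + con_bias + rng.normal(0, NOISE_SD, TRIALS) for _ in range(n_con)]
        + [TRUTH + rng.normal(0, INDEP_SD, TRIALS) for _ in range(n_indep)]
    )
    avg = np.mean(estimates, axis=0)
    return float(np.sqrt(np.mean((avg - TRUTH) ** 2)))

print(f"2 liberals + 2 conservatives:     RMSE {panel_rmse(2, 2, 0):.3f}")
print(f"1 + 1 + 2 uncorrelated sources:   RMSE {panel_rmse(1, 1, 2):.3f}")
```

On this setup, the panel with two uncorrelated outside voices lands measurably closer to the truth than the two-liberals-two-conservatives panel, even though every individual source is equally good. The only difference is how correlated their errors are.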

My point isn’t to rewrite anyone’s protocol for sourcing in this post; it’s to suggest that the social science on decision-making has a lot to add as journalism reorients its practice of objectivity away from neutrality toward truth-seeking. The professional fact-checkers already seem to be doing a pretty good job of following that advice.

*For what it’s worth, PolitiFact uses both “neutrality” and “transparency” to describe its approach to objectivity (Deciding What’s True, p. 124). So the organization may not agree completely with Benkler’s suggestion to move away from “neutrality.”
