Editing as humility

A colleague of mine once called editing “a helping profession.” It’s a nice idea that speaks to how different the craft of editing actually is from how people imagine it. There’s a stereotype of the dictatorial editor, assigning the stories they want, rejecting others, and creating a whole publication in their image. Maybe somewhere that exists, but it hasn’t been my experience. Most editing is about trying to make someone else’s work better, and I want to share a bit here about how I came to embrace that.

When I showed up at Harvard Business Review as an associate editor in 2013, I was hungry for bylines: all I wanted to do was write. Partly, I didn’t think much of editing, since the internet made it so easy to publish. Why edit when you could, as Clay Shirky put it, “publish, then filter”? And partly I didn’t see a career path: I thought being “out there” with bylines and takes was the way to build a career in digital media. My thoughts on both of those things changed gradually. I came to appreciate the importance of editing, and I found that my career was progressing just fine.

But more than that I came to see editing as a form of humility. This is perhaps tied to the kind of work I learned at HBR: editing experts, many of whom didn’t write for the public very often. Editing was a way for me to help really smart, knowledgeable people think and write even better. There was something healthy for me in doing that instead of trying to prove that mine was the smartest take–even though frankly it’s not something I would have sought out. I was brimming with overconfidence, but my work got to be questioning and tinkering and quibbling to help someone else who knew much more than I did.

I still enjoy writing but, as someone who dreamed of being a columnist, I’ve come to be thankful that I learned to be an editor instead.


Derek Thompson has a great Atlantic piece about how the Moneyball-ization of everything has changed culture and sports. The thesis is that analytics push homogenization. I write about data stuff, so I should have something thoughtful to say about that, but instead I want to veer outside my normal lane and register a basketball take: analytics made the NBA way better.

Here’s Derek:

When universal smarts lead to universal strategies, it can lead to a more homogenous product. Take the NBA. When every basketball team wakes up to the calculation that three points is 50 percent more than two points, you get a league-wide blitz of three-point shooting to take advantage of the discrepancy. Before the 2011–12 season, the league as a whole had never averaged more than 20 three-point-shot attempts per game. This year, no team is attempting fewer than 25 threes per game; four teams are attempting more than 40.

This trend is chronicled in the excellent, chart-filled book Sprawlball, which also tends to see it negatively.

But when I started watching the NBA in the ’90s it was way less interesting. It was the just-barely post-Jordan era, and every wing did a lot of iso, 1-on-1, Jordan-imitation stuff. Centers did the post-up equivalent. There was not that much ball movement.

The discovery that 3-point shots were extremely valuable changed all that. When I started watching again a few years back after well over a decade away from the sport I was shocked by how much ball movement there was. The 3-pointer suddenly meant that getting a mediocre player a good shot from outside could be more valuable than just letting your best player go 1-on-1.
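The arithmetic behind that discovery is simple expected value: points per attempt is make probability times point value. A minimal sketch of the comparison, using illustrative shooting percentages (the numbers below are assumptions, not actual league figures):

```python
# Expected points per shot attempt = make probability x point value.
def expected_points(make_prob: float, point_value: int) -> float:
    return make_prob * point_value

# Illustrative percentages (assumptions for the sketch):
three = expected_points(0.36, 3)  # a decent three-point shooter
two = expected_points(0.48, 2)    # a contested long two

print(f"three-pointer: {three:.2f} points per attempt")  # 1.08
print(f"long two:      {two:.2f} points per attempt")    # 0.96
```

On these (made-up but plausible) numbers, the lower-percentage three beats the higher-percentage two, which is the whole analytics case in one line of arithmetic.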

Yes, there’s some homogenization in that all teams shoot threes. Yes, the mid-range game has faded. And yes, there’s a lot of pick-and-roll. But there’s still a good amount of diversity in the skill sets that set those threes up. Luka, Giannis, Jokic, and Morant are wildly different players. All of them anchor an offense that involves supporting players shooting threes. But the ways they set them up are extremely varied, and the end result is movement and passing and switching and double-teaming and just lots more excitement (and beauty) than the ’90s NBA.

Anyway, the overall Moneyball take seems right. But basketball got a lot better thanks to analytics.

Scientific understanding

A perspectives piece in Nature on AI and science provides a nice description of scientific “understanding” that I want to share here:

Imagine an oracle providing non-trivial predictions that are always true. Although such a hypothetical system would have a significant scientific impact, scientists would not be satisfied. They would want “to be able to grasp how the predictions are generated, and to develop a feeling for the consequences in concrete situations”13. Colloquially, we refer to this goal as ‘understanding’, but what does this really mean? To find criteria for scientific understanding, we seek guidance from the philosophy of science… Numerous philosophers [have tried] to formalize what ‘scientific understanding’ actually means. These proposals suggest that ‘understanding’ is connected to the ability to build causal models (for example, Lord Kelvin said “It seems to me that the test of ‘Do we or not understand a particular subject in physics?’ is, ‘Can we make a mechanical model of it?’”13), connected to providing visualizations (or Anschaulichkeit, as its strong proponent Erwin Schrödinger called it26,27) or that understanding corresponds to providing a unification of ideas28,29.

More recently, Henk de Regt and Dennis Dieks have developed a new theory of scientific understanding, which is both contextual and pragmatic12,13,24. They found that techniques such as visualization or unification are ‘tools for understanding’, thereby connecting previous ideas in one general framework. Their theory is agnostic to the specific ‘tool’ being used, making it particularly useful for application in a variety of scientific disciplines. de Regt and Dieks extended Werner Heisenberg’s insights30 and, rather than merely introducing theoretical or hypothetical ideas, the main motivation behind their theory is that a “satisfactory conception of scientific understanding should reflect the actual (contemporary and historical) practice of Science”. Simply put, they argue that: “A phenomenon P can be understood if there exists an intelligible theory T of P such that scientists can recognise qualitatively characteristic consequences of T without performing exact calculations”12,13. de Regt and Dieks defined two interlinked criteria:

  • Criterion of understanding phenomena: a phenomenon P can be understood if a theory T of P exists that is intelligible.
  • Criterion for the intelligibility of theories: a scientific theory T is intelligible for scientists (in context C) if they can recognise qualitatively characteristic consequences of T without performing exact calculations.

We decided to use this specific theory because it can be used to ‘experimentally’ evaluate whether scientists have ‘understood’ new concepts or ideas, rather than by inspecting their methodology, by simply looking at the scientific outcome and the consequences. This approach also coincides with Angela Potochnik’s argument that “understanding requires successful mastery, in some sense, of the target of understanding”11.

Scientific discovery versus scientific understanding

Scientific understanding and scientific discovery are both important aims in science. The two are distinct in the sense that scientific discovery is possible without new scientific understanding… Many discoveries in physics occur before (sometimes long before) a theory or explanation, which provides scientific understanding, is uncovered. Examples include the discovery of superconductivity (and its high-temperature version), the discovery of the cosmological microwave background, neutrino oscillations and the discovery of a zoo of particles before the invention of the quark model.

These examples show that scientific discoveries can lead to scientific and technological disruption without directly contributing to scientific understanding11,24.

What causes recessions?

A few different resources explaining the various causes of recessions…

In 2019 I wrote a feature about firms and recessions, and I summed up the causes of recession this way:

Recessions… can be caused by economic shocks (such as a spike in oil prices), financial panics (like the one that preceded the Great Recession), rapid changes in economic expectations (the so-called “animal spirits” described by John Maynard Keynes; this is what caused the dot-com bubble to burst), or some combination of the three. Most firms suffer during a recession, primarily because demand (and revenue) falls and uncertainty about the future increases.

Here are the three categories that a Congressional Research Service report used in a 2019 brief on the causes of recessions:

Overheated Economy
Recessions can be caused by an overheated economy, in which demand outstrips supply, expanding past full employment and the maximum capacity of the nation’s resources. Overheating can be sustained temporarily, but eventually spending will fall in order for supply to catch up to demand. A classic overheating economy has two key characteristics—rising inflation and unemployment below its “natural” rate…

Asset Bubbles
The last two recessions were arguably caused by overheating of a different type. While neither featured a large increase in price inflation, both featured the rapid growth and subsequent bursting of asset bubbles. The 2001 recession was preceded by the “dot-com” stock bubble, and the 2007-2009 recession was preceded by the housing bubble…

Economic Shocks
They can also be triggered by negative, unexpected, external events, which economists refer to as “shocks” to the economy that disrupt the expansion…
A classic example of a shock is the oil shocks of the 1970s and 1980s.

Here is the IMF:

There are a variety of reasons recessions take place. Some are associated with sharp changes in the prices of the inputs used in producing goods and services. For example, a steep increase in oil prices can be a harbinger of a recession. As energy becomes expensive, it pushes up the overall price level, leading to a decline in aggregate demand. A recession can also be triggered by a country’s decision to reduce inflation by employing contractionary monetary or fiscal policies. When used excessively, such policies can lead to a decline in demand for goods and services, eventually resulting in a recession.

Other recessions, such as the one that began in 2007, are rooted in financial market problems. Sharp increases in asset prices and a speedy expansion of credit often coincide with rapid accumulation of debt. As corporations and households get overextended and face difficulties in meeting their debt obligations, they reduce investment and consumption, which in turn leads to a decrease in economic activity. Not all such credit booms end up in recessions, but when they do, these recessions are often more costly than others. Recessions can be the result of a decline in external demand, especially in countries with strong export sectors.

And here is a bit from David Moss’s A Concise Guide to Macroeconomics, 2nd edition:

Anything that causes labor, capital, or [total factor productivity] to fall could potentially cause a decline in output… A massive earthquake, for example, could reduce output by destroying vast amounts of physical capital. Similarly, a deadly epidemic could reduce output by decimating the labor force…

In some cases, however, output may decline sharply even in the absence of any earthquakes or epidemics… [He cites the Great Depression.] The British economist John Maynard Keynes claimed to have the answer… His key insight, implied by the phrase ‘immaterial devices of the mind’, was that the problem was mainly one of expectations and psychology. For some reason, people had gotten it into their heads that the economy was in trouble, and that belief rapidly became self-fulfilling…. Driven by nothing more than expectations, which Keynes would later refer to as ‘animal spirits,’ the economy had fallen into a vicious downward spiral…

Starting around the time of Keynes, therefore, economists began to realize that there was more to economic growth than just the supply side. Demand mattered a great deal as well, particularly since it could sometimes fall short. In fact, over roughly the next 40 years, it became an article of faith among leading economists and government officials that it was the government’s responsibility to ‘manage demand’ through fiscal and monetary policy.

p. 22-26

There’s one other bit of Moss’s book that’s worth mentioning here. It’s an incredibly slim volume and so he faces the question of how to organize such a brief survey of macroeconomics. He chose to use three overarching topics: Output, money, and expectations.

His whole discussion of recessions is in the Output section, because a recession is a sustained decline in economic output. But you could break the causes of recessions into his three buckets: Output = shocks to labor, capital, or productivity. Money = financial crises or policy-induced shifts to interest rates or the money supply. Expectations = shifts in collective psychology that lower demand.
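The Output bucket maps onto the standard production-function view of supply shocks. A toy sketch using a Cobb-Douglas production function, with illustrative parameter values (the capital share and the size of the shock are assumptions for the example, not estimates):

```python
# Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha)
def output(tfp: float, capital: float, labor: float, alpha: float = 0.3) -> float:
    return tfp * capital**alpha * labor**(1 - alpha)

baseline = output(tfp=1.0, capital=100.0, labor=100.0)  # 100.0

# Moss's earthquake example: the shock destroys 20% of the capital stock.
shocked = output(tfp=1.0, capital=80.0, labor=100.0)

print(f"output falls {(1 - shocked / baseline):.1%}")  # output falls 6.5%
```

Because capital enters with an exponent below one, a 20% capital loss produces a smaller (but still sharp) fall in output; a hit to labor or to TFP would work through the same equation.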

With all this sketched out, one of my biggest questions is how to think about 2000. CRS buckets it with 2008 as the bursting of an “asset bubble.” That feels strange to me. It’s true that both involved financial markets and asset bubbles. But 2000 seemed like a shift in expectations, that then led to an asset bubble bursting and a fairly mild recession. In 2008, an asset bubble was part of the story. But the real story was a financial crisis sparked by opacity and complexity.

I think if I were writing a paragraph on the causes of recession today I’d go with four buckets:

  1. Economic shocks to productive capacity
  2. Policy-induced declines in demand (monetary or fiscal policy)
  3. Changes in expectations (“animal spirits”)
  4. Financial crises

You could think of 2000 as sort of straddling 3 and 4 whereas the global financial crisis was really all 4. The Covid recession was 1. And if we have a recession in the next year it’ll be firmly in 2. Honestly, 3 feels like the rarest one, but I think that’s because most of the time it overlaps with either 2 or 4. Shifts in investor sentiment span 3 and 4, depending on the type of investors and the type of asset. Shifts in demand now almost always relate to 2, since policymakers have internalized Keynes’ insight and actively manage it.

You could also try breaking it up into:

  1. Supply shocks
  2. Demand shocks (whether via policy or shifts in “animal spirits”)
  3. Financial crises

The Fed can’t do it all

In this post, I want to clip together a few different threads about central banks and inflation–and specifically the idea of how you might fight inflation beyond interest rates.

In my new project, Nonrival (a full post about that later but sign up!), I covered the Bank of England this week.

I wrote:

Central banks, for all their faults, have mostly spent the last 15 years trying to do their jobs. Legislators have frequently made those jobs harder, including by embracing austerity in the 2010s. But central banks can only do so much—to boost an economy, or even to fight inflation. At some point investors start to ask: What about the other guys?

Today Ezra Klein goes in a similar direction:

Inflation is a scourge, but interest rates are a blunt tool… I’m more interested in another feature of the progressive consumption tax: the ability to dial it up and down to respond to different economic conditions. In a time of recession, we could drop taxes on new spending, giving the rich and poor alike more reason to spend. In times of inflation, we could raise taxes on new spending, particularly among the wealthy, giving them a concrete reason to cut back immediately and to save and invest more at the same time.

In August, Joe Stiglitz and Anton Korinek put out a whitepaper arguing that monetary policy was being asked to do too much in stabilizing the business cycle.

And at Vox, Emily Stewart interviews Nathan Tankus about whether the Fed is really able to tackle inflation on its own. (I don’t love the “is the Fed a scam” framing but there are some really interesting ideas covered here):

Part of our issue is that we haven’t had other agencies who have been explicitly given the task to think about inflation, to preemptively respond to problems that could potentially cause inflation… We’ve been sold this lie that we can give this one agency responsibility for managing the economy, inoculated from politics, and then it’ll work out best for everyone.

To me the idea here isn’t that you ask the Fed to do less. And my feeling is that while central banks have their problems, they’ve had a pretty good track record relative to other forms of policymaking for the last 15 years.

But it’d be nice if they got a little help. You can’t run an entire economy via monetary policy.

Randomization is good

Researchers used LinkedIn to study the economic power of “weak ties.” It’s a fascinating topic and you can read about the study here. The New York Times reported on the study with the headline “LinkedIn Ran Social Experiments on 20 Million Users Over Five Years.” To put it charitably, that’s a very weird reaction.

Stepping back for a second… People are rightly worried about the power large online platforms have to influence their behavior. That topic can head down a philosophical rabbit hole, as it invites questions about agency and free will. But it’s a perfectly reasonable thing to worry about: Code is a form of governance, and we ought to question the ways companies govern our online spaces.

But that’s happening all the time! Once you become concerned about companies influencing behavior you’ll see it all around you. There’s nothing special about an A/B test in this regard. Randomized experiences are just one way that influence might happen.

However, there’s something nefarious in the colloquial meaning of someone “experimenting on you.” That sounds especially bad. The thing is that the colloquial meaning doesn’t have to involve randomization: If Facebook just tweaks how the news feed works in order to get you to click more ads, that can be a scary form of influence–and meet the colloquial definition of “experimenting” on you–even if they don’t run a randomized controlled trial.

And then there’s the fact that in this case, the randomization is part of exactly the sort of thing you might hope platforms would do: run experiments in collaboration with researchers, who no doubt weighed the ethical considerations involved, in order to create new knowledge.

Of all the ways platforms might influence us, this seems like one of the least nefarious. But the word “experiment” carries a weight that then can make this seem like a uniquely worrisome thing. Worry about the influence, not the academic experiments.

Side doors

“Well shit, we just need to blog more.”

The Verge has been redesigned. That’s from the editor-in-chief, Nilay Patel, who goes on to say:

When you embark on a project to totally reboot a giant site that makes a bunch of money, you inevitably get asked questions about conversion metrics and KPIs and other extremely boring vocabulary words. People will pop out of dark corners trying to start interminable conversations about “side doors,” and you will have to run away from them, screaming.

But there’s only one real goal here: The Verge should be fun to read, every time you open it. If we get that right, everything else will fall into place.

That stood out to me because I used to say my whole job was walking into rooms and explaining to the people I worked with that the audience came in almost entirely through side doors. By that I meant mostly social media, but also search engines, messaging apps, and emails you didn’t send. Almost no one came to your homepage, so if you wanted to attract readers you needed to understand side doors. This wasn’t novel on my part at all; it was conventional wisdom. But as a “digital” editor focused on social media and audience development, I gave this pitch a lot.

And I ended by saying: You can’t change this. You can build the nicest homepage in the world but people still will mostly come through side doors.

But a lot of those side doors just turned out to be bad — not only for publishers but for readers and for public discourse, too. Algorithmic social media increasingly feels like a wrong turn. It’s not good for us.

So I’m glad to see The Verge trying to build its own feed and attract more readers directly. It probably was the case that, when I was talking about side doors in the mid-2010s, most small- and medium-sized publishers couldn’t do much on their own to change those dynamics. It’s not usually a good idea to stand athwart audience habits yelling “Stop!”

But not every audience habit is a step in the right direction either, and even trends that seem inevitable can shift over a period of years.

Side doors are part of what makes the internet so great: You jump from one thing to the next, following a chain of links to places you might not have expected. But many of the side doors we’ve actually built are bad. I hope publishers like The Verge can convince more readers to try the front door for a change.

Some intellectual influences

I’ve been thinking about the perspectives and schools of thought that I came to in my formative years that, for better or worse, have shaped how I think about a wide range of things. I thought it’d be useful to sketch those out, if only for myself. They’re quite different from each other — some are schools of thought or intellectual subfields, some are hazy ideas that resist easy definition.

Two are subfields of economics: behavioral economics and the economics of ideas. One is a philosophical tradition: American pragmatism. And two are related to the internet: what I’m calling the “Berkman perspective,” a set of ideas from internet scholars in and around the Berkman Klein Center; and the “wonkosphere,” the policy blogging world that thrived during the Obama years, of which I was a voracious reader.

Behavioral economics

I’ve been deeply influenced by both the popular writing and research on cognitive bias, decision making, and forecasting. In a nutshell, I take this work to suggest:

The economics of ideas

I’ve always been drawn to the topic of innovation and how it happens, and I’ve enjoyed a number of great books on the subject. In 2015 I had the privilege to audit a PhD course at MIT Sloan on the economics of ideas, which helped me go deeper. In terms of what I’ve come to take away from this field:

  • New ideas are the driver of long-run economic growth, so few questions are as important as the question of how ideas are produced, aka how innovation happens
  • Ideas are an unusual kind of economic good because they’re nonrival. Some of the big assumptions that drive how we’d normally think about markets don’t apply
  • Macro-theoretical models of ideas-driven growth are useful but the economics of ideas is and should be a deeply empirical field: We should look at incentives and markets and property rights but also culture and institutions to understand how innovation happens

The Berkman view

This one was the hardest to name, but I remain deeply influenced by a set of scholars and writers who studied the internet in the early 2000s and 2010s. Many, though certainly not all, were associated at some point with the Berkman Klein Center for Internet & Society at Harvard. And I’m proud to have spent part of a semester studying AI ethics and governance there as an Assembly Fellow in 2019. If I were to sum up what I took from a range of thinkers, it’s this:

  • The internet was a big deal and worth studying. This seems obvious now, but it wasn’t obvious to many people even 15 years ago.
  • “Code is law,” meaning software can shape behavior. It can be a form of governance, limiting what we can do or pushing us toward a particular decision.
  • The internet could enable new and potentially better forms of cooperation and communication.

The “wonkosphere”

It’s hard to overstate how much I was influenced by the policy and economics blogging world of the late 2000s and early 2010s. In terms of what I took from it:

  • Argument produces knowledge. The back-and-forth within this world wasn’t just fun to read, it showed a way to think and debate and learn (not always good of course!)
  • The internet is an amazing tool for research. Reading the wonkosphere convinced me that a good faith reading of the massive quantity of high-quality information available online often produces better journalism than so-called “neutral” reporting
  • There is a technocratic approach to politics that, for better or worse, musters evidence and arguments about policy questions and is skeptical of overarching ideology as a way of choosing between policies


American pragmatism

This last one I’ll just turn over to the intro from the Stanford Encyclopedia of Philosophy:

Pragmatism is a philosophical tradition that – very broadly – understands knowing the world as inseparable from agency within it. This general idea has attracted a remarkably rich and at times contrary range of interpretations, including: that all philosophical concepts should be tested via scientific experimentation, that a claim is true if and only if it is useful (relatedly: if a philosophical theory does not contribute directly to social progress then it is not worth much), that experience consists in transacting with rather than representing nature, that articulate language rests on a deep bed of shared human practices that can never be fully ‘made explicit’.

Behavioral economics in one chart

It’s sometimes claimed, not entirely unreasonably, that the research on cognitive biases amounts to an unwieldy laundry list. Just look at how long the list of cognitive biases is on Wikipedia. This frustration is usually paired with some other criticism of the field: that results don’t replicate, or that there’s no underlying theory, or that it’s a mistake to benchmark the human brain against the so-called “rational ideal.”

I’m not very moved by these critiques; if the list of biases is long, it’s partly because the psychology of heuristics and biases is appropriately a very empirical field. An overarching theory would be nice, but only if it can explain the facts. The whole point of behavioral economics was to correct the fact that economists had let theory wander away from reality.

Still, I’ve been thinking recently about how to sum up the key findings of behavioral science. What’s the shortest possible summary of what we know about bias and decision making?

Enter Decision Leadership by Don Moore and Max Bazerman. They wrote the textbook on decision making, and in this book they offer advice on how to make good decisions across an organization. I’ve had the pleasure to work with them and I recommend the book.

What I want to share here isn’t their advice, but their succinct summary of decision biases, from the book’s appendix. It’s the best synthesis of the field I know of:

p. 196

Here’s a bit more.

The human mind, for all its miraculous powers, is not perfect. We do impressively well navigating a complex world but nevertheless fall short of the rational ideal. This is no surprise–perfect rationality assumes we have infinite cognitive processing capacity and complete preferences. Lacking these, we adapt by using shortcuts or simplifying heuristics to manage challenges that threaten to exceed our cognitive limitations. These heuristics serve us well much of the time, but they can lead to predictable errors. We group these errors into four types based on the heuristics that give rise to them.

p. 195

The availability heuristic

The first is the availability heuristic, which serves as an efficient way of dealing with our lack of omniscience. Since no one knows everything, we rely instead on the information that is available to us… Reliance on the availability heuristic gives recent and memorable events outsize influence in your likelihood judgments. After experiencing a dramatic event such as a burglary, a wildfire, a hurricane, or an earthquake, your interest in purchasing insurance to protect yourself is likely to go up… The availability heuristic leads us to exaggerate vivid, dramatic, or memorable risks–those we can easily retrieve from memory… The availability heuristic biases all of us toward overreliance on what we know or the data we have on hand. Sometimes information is easier to recall because it is emotionally vivid, but other information is privileged simply due to the quirks of human memory… You can reduce your vulnerability to the availability bias by asking yourself what information you would like to have in order to make a fully informed decision–and then go seek it out.

p. 195-198

The confirmation heuristic

The second is the confirmation heuristic, which simplifies the process by which we gather new information… One of the challenges that impedes us in seeking out the most useful evidence to inform our decisions is that confirmation is so much more natural than disconfirmation… Even our own brains are better at confirmation than disconfirmation. We are, as a rule, better at identifying the presence than the absence of something. Identifying who is missing from a group is more difficult than determining who is present. That has the dangerous consequence of making it easier to find evidence for whatever we’re looking for… Confirmation can bias our thought processes even when we are motivated to be accurate. It is even more powerful when it serves a motivation to believe that we are good or virtuous or right. Virtue and sanctimony can align when we defend our group and its belief systems. The motivation to believe that our friends, leaders, and teachers are right can make it difficult to hear evidence that questions them… The automatic tendency to think first of information that confirms your expectations will make it easy for you to jump to conclusions. It will make it easy for you to become overconfident, too sure that the evidence supports your beliefs… If you want to make better decisions, remind yourself to ask what your critics or opponents would say about the same issue.

p. 195, 199-202

The representativeness heuristic

The third is the representativeness heuristic, which stands in for full understanding of cause and effect relationships. We make assumptions about what causes what, relying on the similarity between effects and their putative causes…

p. 195

This is a tricky one so I want to step outside the book for a second and supplement the definition above with the definition from the American Psychological Association:

representativeness heuristic: a strategy for making categorical judgments about a given person or target based on how closely the exemplar matches the typical or average member of the category. For example, given a choice of the two categories poet and accountant, people are likely to assign a person in unconventional clothes reading a poetry book to the former category; however, the much greater frequency of accountants in the population means that such a person is more likely to be an accountant.
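The accountant example is at bottom a base-rate problem, which Bayes’ rule makes concrete. A sketch with made-up numbers (every probability below is an assumption for illustration, not data):

```python
# P(category | unconventional look), via Bayes' rule, up to normalization.
# All numbers are illustrative assumptions.
p_poet = 0.001                  # base rate: poets are rare
p_accountant = 0.04             # accountants are ~40x more common
p_look_given_poet = 0.5         # many poets "look the part"
p_look_given_accountant = 0.05  # few accountants do

# Unnormalized posteriors (ignoring people who are neither):
poet_score = p_poet * p_look_given_poet                    # 0.0005
accountant_score = p_accountant * p_look_given_accountant  # 0.002

# Despite the stereotype match, the accountant wins on base rates.
print(f"accountant is {accountant_score / poet_score:.0f}x more likely")  # 4x
```

Representativeness means judging by the likelihood term alone (how well the person matches the category) and ignoring the prior, which is exactly what this sketch corrects.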

The framing heuristic

Finally, framing helps us establish preferences, deciding how good or bad something is by comparing it to alternatives… Context frames our preferences in important ways. Frames drive our choices in ways that rational theory would not predict. We routinely behave as if we are risk averse when we consider choices about gains but flip to being risk seeking when we think about the same prospect as a loss. This reversal of risk preferences owes itself to the fact that we think about gains and losses relative to a reference point–usually the status quo or, in the case of investments, the purchase price… One [consequence] is the so-called endowment effect, our attachment to the stuff we happen to have… The endowment effect can contribute to the status quo bias, which leads us to be irrationally attached to the existing endowment of possessions, privileges, and practices.

p. 195, 205-209
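The reversal of risk preferences Moore and Bazerman describe is the core of prospect theory. A sketch using Kahneman and Tversky’s value function with their commonly cited parameters (the dollar amounts and the sure-thing-vs-gamble setup are illustrative):

```python
# Kahneman-Tversky value function: concave for gains, convex and
# steeper (loss aversion, lam > 1) for losses.
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    return x**alpha if x >= 0 else -lam * ((-x) ** alpha)

# Frame 1: a sure gain of $500 vs a 50% chance of $1,000.
sure_gain = value(500)
gamble_gain = 0.5 * value(1000)
print(sure_gain > gamble_gain)  # True: risk averse for gains

# Frame 2: a sure loss of $500 vs a 50% chance of losing $1,000.
sure_loss = value(-500)
gamble_loss = 0.5 * value(-1000)
print(gamble_loss > sure_loss)  # True: risk seeking for losses
```

Same expected dollar values in both frames; only the reference point flips, and the preferred option flips with it.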

Summing it up

So, what’s the shortest version of behavioral economics and decision making? We are not perfectly rational, and in particular we’re intuitively quite bad at statistical and probabilistic thinking. We rely heavily on the information we can easily recall, and we look for reasons to confirm what we already think–especially when doing so protects our self-conception or our group’s status. When we think about cause and effect, we don’t think carefully about probability and counterfactuals; instead we put things into categories and construct causal narratives from stories and archetypes. And we are creatures of context: Our frame of reference can shift based on what is made salient to us, and we are often especially attached to the status quo.

There are a lot of narrower biases within these four heuristics, and no doubt there’s plenty to quibble with in any specific taxonomy like this one. But in my book that’s a pretty decent starting point for summing up a wide set of empirical work, and it clearly helps explain a lot of how we think and how we decide.

A causal question

A good tweet:

Of course, putting this question to good use requires judgment. There are no iron rules for mapping a causal claim to a prediction about large-scale data in the messy real world. And there are always a million ways you can explain why the causal claim is still true even if the predicted real-world effect doesn’t turn up. But it’s still a great question. The point of causal analysis is, ultimately, to make claims about the wider world and how it behaves, not just to predict the outcomes of RCTs.