Paul Romer on theory

In a great post defending economist Lisa Cook’s appointment to the Fed Board of Governors, Nobel-winner and famed theorist Paul Romer gets into the role of theory vs. empirics in social science:

There is a role for the type of theory that John [Cochrane, another theorist to whom he is responding…] and I do. Theorists build tools. Some of these tools turn out to be useful because they fit the facts. Many do not. Little harm comes from positing an imaginative new theory that turns out to be wrong provided that empiricists check it against the facts before someone uses it to make an important decision. John is a glider pilot so he understands the importance of this intellectual division of labor. When a passenger plane runs out of fuel – and yes, this can happen – neither John nor I would rely on a theorist skilled in computational aerodynamics to choose between landing at a nearby airport that has a short runway or a more distant one with a longer runway. We’d both want to give the judgment call about where to land to someone who knows the facts about the landing speed and glide ratio of the plane.

William James on certainty

From The Will to Believe in 1896:

Objective evidence and certitude are doubtless very fine ideals to play with, but where on this moonlit and dream-visited planet are they found? I am, therefore, myself a complete empiricist so far as my theory of human knowledge goes. I live, to be sure, by the practical faith that we must go on experiencing and thinking over our experience, for only thus can our opinions grow more true; but to hold any one of them — I absolutely do not care which — as if it never could be reinterpretable or corrigible, I believe to be a tremendously mistaken attitude, and I think that the whole history of philosophy will bear me out…

…But please observe, now, that when as empiricists we give up the doctrine of objective certitude, we do not thereby give up the quest or hope of truth itself. We still pin our faith on its existence and still believe that we gain an ever better position towards it by systematically continuing to roll up experiences and think. Our great difference from the scholastic lies in the way we face. The strength of his system lies in the principles, the origin, the terminus a quo of his thought; for us the strength is in the outcome, the upshot, the terminus ad quem. Not where it comes from but what it leads to is to decide. It matters not to an empiricist from what quarter an hypothesis may come to him: he may have acquired it by means fair or foul; passion may have whispered or accident suggested it; but if the total drift of thinking continues to confirm it, that is what he means by its being true.

Pragmatism: A Reader, p. 79-81

Models of war

For the past few weeks, The Ezra Klein Show has been doing episodes about Russia and Ukraine from a variety of perspectives. In the most recent one, Ezra described his approach:

I want to begin today by taking a moment and getting at the theory of how we’re covering Russia’s invasion of Ukraine on the show. There is no way to fully understand an event this vast, where the motivations of the players and the reality on the ground are this unknowable. There’s no one explanation, no one interpretation that can possibly be correct. And if anyone tells you they’ve got that, you should be very skeptical. But even if all models are incomplete, some are useful. And so each episode has been about a different model, a different framework, you can use to understand part of the crisis.

I approve. And I made my own attempt at a many-model explanation of the conflict in a piece for Quartz a couple of weeks back. I tried to balance structural and game-theoretic explanations with historical and personality-driven ones, and to present the outside view as well as the inside one.

For the outside view, I relied on Chris Blattman’s excellent forthcoming book Why We Fight. Here’s the summary:

In Why We Fight, Blattman uses game theory to explain why war does and doesn’t happen. His starting point is that war is rare because it’s expensive. But five factors can overwhelm the incentives for peace:

💪 Unchecked interests. War is more likely when the people in charge don’t pay the price for it. That’s almost always true to an extent, but some leaders are more or less insulated from the costs of conflict.

🎲 Uncertainty. Neither side knows for sure how strong the other is. One side could be bluffing about its strength or resolve, so sometimes the other side calls.

🗝️ Commitment problems. When one side is growing stronger, the other may want to attack before its adversary gets too powerful. The growing power might promise not to attack later on when it’s the dominant power, but that commitment can’t be trusted.

🤔 Misperceptions. Decision makers are overconfident and don’t understand how their adversaries think.

🖤 Intangible incentives. Sometimes people care about things that can’t be bargained for and go beyond costs and benefits—like vengeance, glory, or freedom.

You can read the rest here.
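Blattman's starting point — war is rare because it is costly, so some peaceful split of the stakes usually beats fighting — can be sketched as a toy bargaining model. This is my own illustration in the spirit of the book, with invented numbers, not code from Blattman:

```python
# Toy bargaining model of war (all numbers invented).
# Two sides contest a prize of size 1. If they fight, side A wins with
# probability p, and fighting destroys a fraction c of the prize for each side.

def bargaining_range(p, c):
    """Return the range of splits x (A's share) that both sides prefer to war.

    A's expected value from fighting is p - c, B's is (1 - p) - c.
    A accepts any x >= p - c; B accepts any x <= p + c.
    """
    return (max(0.0, p - c), min(1.0, p + c))

low, high = bargaining_range(p=0.5, c=0.25)
print(low, high)  # 0.25 0.75 — a wide range of deals both sides prefer to war
```

As long as war is costly (c > 0) the range is nonempty, so a mutually preferable deal always exists on paper; the five factors above are explanations for why the sides fail to find or accept one.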

David Leonhardt on logic

On the Josh Barro Very Serious podcast, all about making use of expert knowledge, here’s David Leonhardt of the New York Times:

Don’t go to the nihilist place of ‘Well, there’s no such thing as a fact’, right? And ‘We can all pick our experts on climate change.’ And ‘Maybe it’s happening or maybe it’s not.’ Or, ‘Maybe communism works or maybe it doesn’t…’

I would just tell people: Don’t think that everything is a 50/50 issue. It’s true that there are often [expert] divides but it’s often true that the weight of the evidence often lines up more strongly for one argument than another.

I do think in terms of tips for people who are not journalists or academics, I think logic is an underused tool. And I think too often people are saying ‘Wait is there a peer reviewed study that proves this point?’ And OK if there is we should take that seriously. But listen to the argument that people are making and ask yourself if it made sense. Early on in the pandemic when the CDC and other experts told us not to wear masks, it didn’t make any logical sense. There’s a reason doctors and nurses wear masks in hospitals. There’s a reason why societies in Asia that have been battling contagious viruses a lot recently put a lot of emphasis on masks. Use logic. Ask yourself where does the evidence line up. And recognize that people — all of us — are going to more heavily weight evidence that fits our priors but that every question is not simply a coin flip and that you actually can find useful knowledge. And often logic is your best tool for sorting through who’s full of it and who’s actually saying stuff that makes sense.

Integrative thinking

On the Ezra Klein Show last year, Phil Tetlock (being interviewed by Julia Galef) described how good forecasters integrate multiple perspectives into their own:

JULIA GALEF: So we’ve kind of touched on a few things that made the superforecasters super, but if you had to kind of pick one or two things that really made the superforecasters what they were, what would they be?

PHIL TETLOCK: We’ve already talked about one of them, which is their skill at balancing conflicting arguments, their skill of perspective taking. However, although, but. They put the cognitive brakes on arguments before arguments develop too much momentum. So they’re naturally inclined to think that the truth is going to be some blurry integrative mix of the major arguments that are in the current intellectual environment, as opposed to the truth is going to be way, way out there. Now, of course, if the truth happens to be way, way out there, and we’re on the verge of existential catastrophe, I’m not going to count on them to pick it up.

JULIA GALEF: In addition to these dispositions and sort of general thinking patterns that the superforecasters had, are there any kind of concrete habits that they would always or often make use of when they were trying to make a forecast that other people could adopt, too?

PHIL TETLOCK: One of them is this tendency to be integratively complex and qualify your arguments, howevers and buts and all those, a sign that you recognize the legitimacy of competing perspectives. As an intellectual reflex, you’re inclined to do that. And that’s actually a challenge to Festinger and cognitive dissonance. They’re basically saying, look, these people have more tolerance for cognitive dissonance than Leon Festinger realized was possible.

(Emphasis mine.)

Cognitive dissonance is the state of having inconsistent beliefs. Tetlock is saying that good forecasters are more willing than most to have inconsistent beliefs. (In his book Superforecasting he uses the term “consistently inconsistent.”)

How could inconsistency be a good thing? Well, as he says, the integrative mindset tends to think “that the truth is going to be some blurry integrative mix of the major arguments.”

You could imagine two different ways of integrating seemingly disparate arguments or evidence. Say someone shows evidence that raising the minimum wage caused job losses in France (these are made up examples). And someone else showed evidence that a higher minimum wage didn’t lead to any job losses in the U.S. Say you think in both cases the evidence is high quality. How do you integrate those two views?

One way would be to try to think of reasons why they could both be true: What’s different about France and the U.S. such that the causal arrow might reverse in the two cases? That, I think, is a form of the integrative mindset. You’re trying to logically “integrate” two views into a consistent model of the world.

But the other integrative approach is basically to average the two pieces of evidence: to presume that on average the answer is in the middle, that maybe minimum wage hikes cause modest job losses. That is a “blurry integrative mix,” and it’s not super rigorous. But it often seems to work.
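The "average the evidence" move can be made slightly more rigorous with inverse-variance weighting, the standard fixed-effect meta-analysis estimator: more precise studies get more weight. The numbers below are invented, like the minimum-wage examples above:

```python
# "Average the evidence" via inverse-variance weighting.
# Each study's estimate is weighted by 1 / variance, so tighter studies count more.

def pooled_estimate(estimates, variances):
    """Combine effect estimates, weighting each by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Made-up effects of a minimum-wage hike on employment (percent change):
# one study finds -2.0, another finds 0.0, with equal precision.
print(pooled_estimate([-2.0, 0.0], [1.0, 1.0]))  # -1.0: "the middle"
# With unequal precision, the pooled answer leans toward the tighter study:
print(pooled_estimate([-2.0, 0.0], [4.0, 1.0]))
```

When the two studies are equally precise, this reduces to the simple average — the "blurry integrative mix."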

For the rest of the post I want to just quote a couple other descriptions of integrative thinking…

How PolitiFact, the fact-checking organization, “triangulates the truth”:

PolitiFact items often feature analysis from experts or groups with opposing ideologies, a strategy described internally as “triangulating the truth.” “Seek multiple sources,” an editor told new fact-checkers during a training session. “If you can’t get an independent source on something, go to a conservative and go to a liberal and see where they overlap.” Such “triangulation” is not a matter of artificial balance, the editor argued: the point is to make a decisive ruling by forcing these experts to “focus on the facts.” As noted earlier, fact-checkers cannot claim expertise in the complex areas of public policy their work touches on. But they are confident in their ability to choose the right experts and to distill useful information from political arguments.

Roger Martin, in HBR in 2007, says great leaders are defined by their ability “to hold in their heads two opposing ideas at once.”

And then, without panicking or simply settling for one alternative or the other, they’re able to creatively resolve the tension between those two ideas by generating a new one that contains elements of the other but is superior to both. This process of consideration and synthesis can be termed integrative thinking.


Let’s get one thing straight: I am not a “superforecaster.”

Over the past decade, I’ve written about forecasting research and forecasting platforms. And I’ve participated in them as well. In this post I’ll share some of my results to date. Though I’m nowhere near superforecaster level (the top 2% of participants), I’m pleased to have been consistently above average.

Here are my results:

  • Good Judgment Project (~2017): 23 questions, 68th percentile
  • Good Judgment Open (2015-2017): 9 questions, 60th percentile*
  • Good Judgment Open (2021): 4 questions, 76th percentile*
  • Foretell/Infer (2021): 2 questions, 90th percentile
  • Update Aug. 2022: On INFER first half of 2022, 7 questions, 76th percentile; lifetime INFER ranking 87th percentile.

The number of questions is not the number of forecasts: in many cases I made several forecasts over time on the same question. I’ve given percentiles rather than relative Brier scores or other measures because a) they’re more intuitive and b) the GJ Project setup I did was a market (no real money), so the results were given in terms of total (fake) dollars made and the percentile that total placed me at. The latter is more comparable to the other scoring systems.

(*) GJP and Infer report percentile scores across an entire season so I used those above. GJ Open doesn’t, best I can tell, so in these cases I’ve averaged my percentile scores for each question, which is a bit different from percentile in total score.
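For readers unfamiliar with the Brier scores mentioned above, here is a minimal sketch of the plain (not relative) version for binary questions: the mean squared error of your probability forecasts, where 0 is perfect. Some platforms sum over all outcomes instead, which doubles these numbers:

```python
# Brier score for binary questions: mean squared error of probability forecasts.
# 0 is perfect; always saying 50/50 scores 0.25.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities assigned to 'yes'; outcomes: 1 (yes) or 0 (no)."""
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

print(brier_score([0.5, 0.5], [1, 0]))    # 0.25 — pure coin-flipping
print(brier_score([0.75, 0.25], [1, 0]))  # 0.0625 — confident and right
```

Lower is better, and the penalty for confident wrong answers grows quadratically — a forecast of 0.9 on a question that resolves "no" costs 0.81 on that question.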

Here’s another view, this one excluding my Good Judgment Project results because I don’t have percentile scores for each question.

For Good Judgment Project, not included in the chart, I “made money” (again: no actual money involved) on 17 of 23 questions, lost money on 4 and was basically even on 2.

Some of my worst scores across all of this involved the 2016 election (including primaries). One of my best involved venture capital. My impression is that, although subject matter knowledge is nice to have, time spent is the major limiting factor. Spending more time and updating forecasts more regularly pays off, even in areas where I’m coming in fairly fresh.

To close out, here is some of my writing about forecasting:

Sociology, history, and epistemology

More than 50 years ago, Quine suggested that epistemology must be “naturalized.” Here is Kwame Anthony Appiah explaining this idea in his book Thinking It Through:

To claim that a belief is justified is not just to say when it will be believed but also to say when it ought to be believed. And we don’t normally think of natural science as telling us what we ought to do. Science, surely, is about describing and explaining the world, not about what we should do?

One way to reconcile these two ideas would be to build on the central idea of reliabilism and say that what psychology can teach us is which belief-forming processes are in fact reliable. So here epistemology and psychology would go hand in hand. Epistemology would tell us that we ought to form our beliefs in ways that are reliable, while psychology examines which ways these are.

p. 74-75

This role for psychology should be familiar to anyone who’s read Thinking, Fast and Slow — cognitive biases are rampant and get in the way of accurate belief — or Superforecasting — here are some practices to overcome those limitations — or any number of similar books.

But why stop at psychology?

Belief formation is necessarily social, as I’ve pointed out in a few recent posts. In one I quoted Will Wilkinson:

If you want an unusually high-fidelity mental model of the world, the main thing isn’t probability theory or an encyclopedic knowledge of the heuristics and biases that so often make our reasoning go wrong. It’s learning who to trust. That’s really all there is to it. That’s the ballgame.

In another I quoted Naomi Oreskes:

Feminist philosophers of science, most notably Sandra Harding and Helen Longino, turned that argument on its head, suggest[ing] that objectivity could be reenvisaged as a social accomplishment, something that is collectively achieved.

In one of those posts I unwittingly used the term “social epistemology” to make my point that belief is social; that turns out to be its own philosophical niche. Per Stanford Encyclopedia of Philosophy:

Social epistemology gets its distinctive character by standing in contrast with what might be dubbed “individual” epistemology. Epistemology in general is concerned with how people should go about the business of trying to determine what is true, or what are the facts of the matter, on selected topics. In the case of individual epistemology, the person or agent in question who seeks the truth is a single individual who undertakes the task all by himself/herself, without consulting others. By contrast social epistemology is, in the first instance, an enterprise concerned with how people can best pursue the truth (whichever truth is in question) with the help of, or in the face of, others. It is also concerned with truth acquisition by groups, or collective agents.

The entry is full of all sorts of good topics familiar to anyone who reads about behavioral science: rules for Bayesian reasoning, how to aggregate beliefs in a group, network models of how beliefs spread, when and whether deliberation leads to true belief. But it is all fairly ahistorical.

Compare that to Charles Mills, writing about race, white supremacy, and why epistemology, once naturalized, needs both sociology and history:

[Quine’s work] had opened Pandora’s box. A naturalized epistemology had, perforce, also to be a socialized epistemology; this was ‘a straightforward extension of the naturalistic approach.’ What had originally been a specifically Marxist concept, ‘standpoint theory,’ was adopted and developed to its most sophisticated form in the work of feminist theorists, and it became possible for books with titles like Social Epistemology and Socializing Epistemology, and journals called Social Epistemology, to be published and seen as a legitimate part of philosophy. The Marxist challenge thrown down a century before could finally be taken up…

A central theme of the epistemology of the past few decades has been the discrediting of the idea of a raw perceptual ‘given’ completely unmediated by concepts… In most cases the concepts will not be neutral but oriented toward a certain understanding, embedded in sub-theories and larger theories about how things work.

In the orthodox left tradition, this set of issues is handled through the category of ‘ideology’; in more recent radical theory, through Foucault’s ‘discourses.’ But whatever one’s larger meta-theoretical sympathies, whatever approach one thinks best for investigating these ideational matters, such concerns obviously need to be part of a social epistemology. For if the society is one structured by relations of domination and subordination (as of course all societies in human history past the hunting-and-gathering stage have been) then in certain areas this conceptual apparatus is likely to be negatively shaped and inflected in various ways by the biases of the ruling group(s).

Black Rights / White Wrongs p. 60-63

Crucially, Mills characterizes this kind of bias as “ignorance” in part because it has “the virtue of signaling my theoretical sympathies with what I know will seem to many a deplorably old-fashioned ‘conservative’ realist intellectual framework, one in which truth, falsity, facts, reality, and so forth are not enclosed with ironic scare-quotes.” The history and sociology of race (like class or gender) help explain not just why people believe what they do but also why people reach incorrect beliefs.

That view is in contrast with some other sociological programs, as the Stanford entry on social epistemology notes:

A movement somewhat analogous to social epistemology was developed in the middle part of the 20th century, in which sociologists and deconstructionists set out to debunk orthodox epistemology, sometimes challenging the very possibility of truth, rationality, factuality, and/or other presumed desiderata of mainstream epistemology. Members of the “strong program” in the sociology of science, such as Bruno Latour and Steve Woolgar (1986), challenged the notions of objective truth and factuality, arguing that so-called “facts” are not discovered or revealed by science, but instead “constructed”, “constituted”, or “fabricated”. “There is no object beyond discourse,” they wrote. “The organization of discourse is the object” (1986: 73).

A similar version of postmodernism was offered by the philosopher Richard Rorty (1979). Rorty rejected the traditional conception of knowledge as “accuracy of representation” and sought to replace it with a notion of “social justification of belief”. As he expressed it, there is no such thing as a classical “objective truth”. The closest thing to (so called) truth is merely the practice of “keeping the conversation going” (1979: 377).

But as Oreskes argues in her defense of science as a social practice, the recognition that knowledge is fundamentally social doesn’t require a belief in relativism.

A naturalized epistemology requires, in Appiah’s words, a search for “belief-forming processes [that] are in fact reliable.” That requires the study of how belief formation works at the group level–including an appreciation of history and sociology. To overcome our biases we need to consider the specific society within which we are trying to find the truth, and the injustices that pervade it.

A short definition of power

From Power for All, by Julie Battilana and Tiziana Casciaro:

There are two common threads across these definitions [of power across the social sciences]. The first is that the authors view power as the ability of a person or a group of people to produce an effect on others–that is, to influence their behaviors. This influence can be exercised in different ways, which has led social scientists to distinguish between different forms of power. As summarized by the sociologist Manuel Castells, “Power is exercised by means of coercion (the monopoly on violence, legitimate or not, by the state) and/or by the construction of meaning in people’s minds through mechanisms of cultural production and distribution.” Therefore, two broad categories underpin the types of power identified in the literature. The first category encompasses persuasion-based types of power, such as expert power that stems from trusting someone’s know-how, referent power that stems from admiration for or identification with someone, or power stemming from control over cultural norms. The other category comprises coercion-based types of power that include the use of force (be it physically violent or not) and authority (or “legitimate power”) to influence people’s behaviors. Building on this large and rich body of work, we define power as the ability to influence another person or group’s behavior, be it through persuasion or coercion.

The second common thread is that they all, implicitly or explicitly, posit that power is a function of one actor’s dependence on another. Social exchange theory articulates this view clearly in the seminal model of power-dependence relations developed by sociologist Richard Emerson. In this view, power is the inverse of dependence. The power of Actor A over Actor B is the extent to which Actor B is dependent on Actor A. The dependence of Actor B on Actor A is “directly proportional to B’s motivational investment in goals mediated by A and inversely proportional to the availability of those goals to B outside of the A-B relation.” The fundamentals of power that we present in this book are derived from this conceptualization of power. They posit that the power of Actor A over Actor B depends on the extent to which A controls access over resources that B values and that, in turn, the power of Actor B over Actor A depends on the extent to which B controls access over resources that A values. It follows from the fundamentals of power that power is always relational and that it is not a zero-sum game. The power relationship between A and B may be balanced if A and B are mutually dependent and they each value the resources that the other party has access to. It is imbalanced if one of the parties needs the resources that the other party can provide more.

Importantly the resources that each of the parties value may be psychological as well as material…

Cultural norms shape what is valued in a given context, while the distribution of resources favors some people and organizations and disadvantages others…

p. 200-201 (Appendix); emphasis added.
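Emerson's power-dependence relation can be rendered as a toy model. The functional form and numbers below are my own illustration of the proportionality the quote describes, not anything from the book:

```python
# Toy rendering of Emerson's power-dependence relation:
# B's dependence on A rises with how much B values the goals A mediates,
# and falls with B's alternatives outside the A-B relation.
# The specific functional form here is an invented illustration.

def dependence(value_of_goals, alternatives):
    """Dependence of B on A; `alternatives` counts outside sources (>= 0)."""
    return value_of_goals / (1 + alternatives)

def power(value_b_places_on_a_resources, b_alternatives):
    # In Emerson's model, the power of A over B *is* B's dependence on A.
    return dependence(value_b_places_on_a_resources, b_alternatives)

# B values what A controls at 10 and has no other source: A is powerful.
print(power(10, 0))  # 10.0
# Give B four other sources of the same resource and A's power collapses.
print(power(10, 4))  # 2.0
```

The second call illustrates why cultivating alternatives is itself a power strategy: it changes the denominator, not the prize.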

And strategies for shifting power:

Here’s the book. Here’s a summary from Charter. Here’s a past post of mine quoting Battilana’s work.

Governance, growth, and equity

When I studied environmental issues, I was taught three lenses through which to understand them:

  • The neo-Malthusians emphasized resource scarcity, natural limits, and scientific management. Most conservationists and environmental scientists fit this perspective.
  • The Cornucopians emphasized markets, technology, and humanity’s ability to invent its way out of shortages. Their ranks include lots of economists and Silicon Valley-types; the Simon-Ehrlich wager is a canonical example of the cornucopian viewpoint prevailing.
  • Finally, Political Ecologists emphasized environmental justice. They asked who would be harmed by pollution and by policies to limit it, and whose voices were left out of decision-making.

These three perspectives served me well, including when my job involved following climate policy closely. And lately I have been thinking about something similar to understand the governance and ethics of technology. I will call these three perspectives: Governance, growth, and equity.

  • The governance perspective emphasizes better management and regulation of existing technologies, new or old. It tries to maximize benefits and minimize harms. Lawrence Lessig’s book Code 2.0 is, to me, a definitive articulation of the governance view: Lessig took the idea of code and conceived of it as one more way to regulate. Laws, norms, and markets all regulate our lives, constraining and enabling behavior. Code was a form of “architecture”–how something is “designed” or “built”–and architecture was the fourth form of regulation. Lessig was interested in both how we would regulate behavior on the internet and how the code itself would regulate us.
  • The growth perspective emphasizes technology’s ability to raise living standards over time by solving new problems and making humans more productive. This perspective asks: Are we investing enough in new ideas and technologies and are we properly incentivizing those things given the market failures that surround them? The best articulations of this view, in my opinion, come from the economics of ideas, innovation, and new growth theory. For a comprehensive view on the benefits of technology, see Robert Gordon’s The Rise and Fall of American Growth.
  • Finally, the equity view asks who bears the harms of new technologies and whose voices are left out of building and regulating tech. It emphasizes inclusion and democracy and is skeptical of concentrated power. Meredith Whittaker’s 2021 essay “The steep cost of capture,” on the dangers of concentrating AI research in a few large, for-profit firms, captures this view. So does Sheila Jasanoff’s essay “‘Preparedness’ Won’t Stop the Next Pandemic.” They question the assumptions of the growth and governance views, and reorient the discussion to consideration of power.

The lesson of many-model thinking is that combining different mental models of a problem or phenomenon often yields better results than trying to just pick the best model. In the context of environmental issues, many people are drawn to one of the three perspectives I described, and maybe have at least some sympathy for a second. But many of them are skeptical or even dismissive of at least one, and so are at risk of missing something important about those issues.

One starting point for thinking through tech ethics is to consider the issue from all three perspectives. Take the harms of social media.

  • The governance view would emphasize laws and regulations that could constrain social media companies and management practices those companies could use to minimize harm. The governance view recommends things like creating a fiduciary duty for platforms, auditing algorithms, and enforcing antitrust law.
  • The growth view emphasizes technical fixes–like better algorithmic content moderation–that can mitigate harm. And it points out the importance of new entrants and competition to speed along those improvements.
  • The equity view questions whether companies and an industry that are not at all representative of the societies they serve can ever operate responsibly. It suggests more fundamentally decentralized, participatory platforms, owned and managed as a commons. Its interest in antitrust goes beyond lowering prices to raise a more radical critique of concentrated economic power. And it insists that marginalized communities be included in discussions of how to improve social media.

On any given issue, one of these perspectives might be stronger or weaker. I don’t find the growth view all that helpful when it comes to social media, for example–but in a discussion of handling pandemics or governing AI I believe the growth view would have a lot to add.

The trick for those thinking about the ethics of tech is to consider all three perspectives and then use judgment to strike the right balance between them. But the starting point should be considering the concerns of governance, of growth, and of equity.

“Thin” and “thick” causality

Kathryn Paige Harden’s book The Genetic Lottery: Why DNA Matters for Social Equality includes a really nice primer on causality, with a distinction between “thin” and “thick” versions of it. The book is about genetics, but that’s not my focus in this post; more about the book here and here. Here are some excerpts of her treatment of causality:

Causes and Counterfactuals

In 1748, the Scottish philosopher David Hume offered a definition of “cause” that was actually two definitions in one:

“We may define a cause to be an object, followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed.”

The first half of Hume’s definition is about regularity–if you see one thing, do you always see a certain other thing? If I flick the light switch, the lights regularly, and almost without exception, come on…

Regularity accounts of causality occupied philosophers’ attention for the next two centuries, while the second half of Hume’s definition–where if the first object had not been, the second had never existed–was relatively neglected. Only in the 1970s did the philosopher David Lewis formulate a definition of cause that more closely resembled the second half of Hume’s definition. Lewis described a cause as “Something that makes a difference, and the difference it makes must be a difference from what would have happened without it.”

Lewis’s definition of a cause is all about the counterfactual–X happened, but what if X had not happened?…

[Saying that X causes Y] does not imply that researchers know the mechanism for how this works…

Each of these mechanistic stories could be decomposed into a set of sub-mechanisms, a matryoshka doll of “How?”…

But understanding mechanism is a separable set of scientific activities from those activities that establish causation…

p. 99-104

She goes on to describe a concept of “portability” that then ties into the problem of generalizability:

The portability of a cause can be limited or unknown… The developmental psychologist Urie Bronfenbrenner referred to the “bioecological” context of people’s lives. Everyone is embedded in concentric circles of context… I find Bronfenbrenner’s bioecological model to be a helpful framework for thinking about the portability of causes of human behavior: Which of these circles would have to change, and by how much, in order for the causal claim to no longer be true? Here, knowing about the mechanism also helps knowing about portability, as a good understanding of mechanism allows one to predict how cause-effect relationships will play out even in conditions that have never been observed.

p. 106-107

Finally she distinguishes between “thin” and “thick” causal explanations:

In the course of ordinary social science and medicine, we are quite comfortable calling something a cause, even when (a) we don’t understand the mechanisms by which the cause exerts its effects, (b) the cause is probabilistically but not deterministically associated with effects, and (c) the cause is of uncertain portability across time and space. “All” that is required to assert that you have identified a cause is to demonstrate evidence that the average outcome for a group of people would have been different if they had experienced X instead of Not-X…

I’m going to call this the “thin” model of causation.

We can contrast the “thin” model of causation with the type of “thick” causation we see in monogenic genetic disorders or chromosomal abnormalities. Take Down’s syndrome, for instance. Down’s syndrome is defined by a single, deterministic, portable cause… And this causal relationship operates as a “law of nature,” in the sense that we expect the trisomy-Down’s relationship to operate more or less in the same way, regardless of the social milieu into which an individual is born.

p. 108
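Harden's "thin" criterion can be put in code: a cause is identified when the average outcome differs between the group that experienced X and the group that experienced Not-X. A minimal sketch with invented data from a hypothetical randomized experiment:

```python
# "Thin" causation as a difference in group means (average treatment effect).
# Data are invented for illustration; no mechanism or portability is claimed.

def average(xs):
    return sum(xs) / len(xs)

def average_treatment_effect(treated_outcomes, control_outcomes):
    """Difference in group means: the 'thin' causal estimate."""
    return average(treated_outcomes) - average(control_outcomes)

treated = [7, 6, 8, 7]  # outcomes for people who experienced X
control = [5, 4, 6, 5]  # outcomes for people who experienced Not-X
print(average_treatment_effect(treated, control))  # 2.0
```

Nothing in that calculation says how X works, whether it works for any particular individual, or whether it would work in a different time and place — which is exactly Harden's point about what "thin" causation does and doesn't deliver.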