Integrative thinking

On the Ezra Klein Show last year, Phil Tetlock (being interviewed by Julia Galef) described how good forecasters integrate multiple perspectives into their own:

JULIA GALEF: So we’ve kind of touched on a few things that made the superforecasters super, but if you had to kind of pick one or two things that really made the superforecasters what they were, what would they be?

PHIL TETLOCK: We’ve already talked about one of them, which is their skill at balancing conflicting arguments, their skill of perspective taking. However, although, but. They put the cognitive brakes on arguments before arguments develop too much momentum. So they’re naturally inclined to think that the truth is going to be some blurry integrative mix of the major arguments that are in the current intellectual environment, as opposed to the truth is going to be way, way out there. Now, of course, if the truth happens to be way, way out there, and we’re on the verge of existential catastrophe, I’m not going to count on them to pick it up.

JULIA GALEF: In addition to these dispositions and sort of general thinking patterns that the superforecasters had, are there any kind of concrete habits that they would always or often make use of when they were trying to make a forecast that other people could adopt?

PHIL TETLOCK: One of them is this tendency to be integratively complex and qualify your arguments, howevers and buts and all those, a sign that you recognize the legitimacy of competing perspectives. As an intellectual reflex, you’re inclined to do that. And that’s actually a challenge to Festinger and cognitive dissonance. They’re basically saying, look, these people have more tolerance for cognitive dissonance than Leon Festinger realized was possible.

(Emphasis mine.)

Cognitive dissonance is the state of having inconsistent beliefs. Tetlock is saying that good forecasters are more willing than most to have inconsistent beliefs. (In his book Superforecasting he uses the term “consistently inconsistent.”)

How could inconsistency be a good thing? Well, as he says, the integrative mindset tends to think “that the truth is going to be some blurry integrative mix of the major arguments.”

You could imagine two different ways of integrating seemingly disparate arguments or evidence. Say someone shows evidence that raising the minimum wage caused job losses in France, and someone else shows evidence that a higher minimum wage didn’t lead to any job losses in the U.S. (these are made-up examples). Say you think in both cases the evidence is high quality. How do you integrate those two views?

One way would be to try to think of reasons why they could both be true: What’s different about France and the U.S. such that the causal arrow might reverse in the two cases? That, I think, is a form of the integrative mindset. You’re trying to logically “integrate” two views into a consistent model of the world.

But the other integrative approach is basically to average the two pieces of evidence: to presume that on average the answer is in the middle, that maybe minimum wage hikes cause modest job losses. That is a “blurry integrative mix,” and it’s not super rigorous. But it often seems to work.
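One common version of that averaging, for what it’s worth, is the inverse-variance weighting used in meta-analysis: pool the estimates, trusting the more precise ones more. Here’s a minimal sketch, with made-up numbers to match the made-up examples above:

```python
# Inverse-variance (precision-weighted) pooling of two effect estimates,
# the standard meta-analytic way to "average the evidence."

def pool(estimates):
    """estimates: list of (effect, standard_error) pairs."""
    weights = [1 / se**2 for _, se in estimates]
    return sum(w * e for w, (e, _) in zip(weights, estimates)) / sum(weights)

# Made-up findings: France shows job losses, the U.S. shows roughly none.
france = (-2.0, 0.8)  # percent change in employment, standard error
usa = (0.0, 0.7)

print(round(pool([france, usa]), 2))  # -0.87: "somewhere in the middle"
```

The pooled answer lands between the two findings, closer to whichever estimate is more precise, which is pretty much the “blurry integrative mix” Tetlock describes.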

For the rest of the post I want to just quote a couple other descriptions of integrative thinking…

How PolitiFact, the fact-checking organization, “triangulates the truth”:

PolitiFact items often feature analysis from experts or groups with opposing ideologies, a strategy described internally as “triangulating the truth.” “Seek multiple sources,” an editor told new fact-checkers during a training session. “If you can’t get an independent source on something, go to a conservative and go to a liberal and see where they overlap.” Such “triangulation” is not a matter of artificial balance, the editor argued: the point is to make a decisive ruling by forcing these experts to “focus on the facts.” As noted earlier, fact-checkers cannot claim expertise in the complex areas of public policy their work touches on. But they are confident in their ability to choose the right experts and to distill useful information from political arguments.

Roger Martin, in HBR in 2007, says great leaders are defined by their ability “to hold in their heads two opposing ideas at once.”

And then, without panicking or simply settling for one alternative or the other, they’re able to creatively resolve the tension between those two ideas by generating a new one that contains elements of the other but is superior to both. This process of consideration and synthesis can be termed integrative thinking.

Forecasting

Let’s get one thing straight: I am not a “superforecaster.”

Over the past decade, I’ve written about forecasting research and forecasting platforms, and I’ve participated in them as well. In this post I’ll share some of my results to date. Though I’m nowhere near superforecaster level (the top 2% of participants), I’m pleased to have been consistently above average.

Here are my results:

  • Good Judgment Project (~2017): 23 questions, 68th percentile
  • Good Judgment Open (2015-2017): 9 questions, 60th percentile*
  • Good Judgment Open (2021): 4 questions, 76th percentile*
  • Foretell/Infer (2021): 2 questions, 90th percentile

The number of questions is not the number of forecasts: in many cases I made several forecasts over time on the same question. I’ve given percentiles rather than relative Brier scores or other measures because a) they’re more intuitive and b) the GJ Project setup I did was a market (no real money), so results were reported as total (fake) dollars made and the percentile that total placed me in. The latter is more comparable to the other scoring systems.

(*) GJP and Infer report percentile scores across an entire season, so I used those above. GJ Open doesn’t, as best I can tell, so in these cases I’ve averaged my percentile scores across questions, which is a bit different from a percentile on total score.
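For reference, here’s a minimal sketch of the Brier score mentioned above, in its common binary form; platform conventions differ (the original formulation sums squared error over every possible outcome, so it ranges from 0 to 2), and these forecasts are hypothetical:

```python
# Brier score for probability forecasts on a binary question.
# Squared error between forecast and outcome: lower is better, 0 is perfect.

def brier(probability: float, outcome: int) -> float:
    return (probability - outcome) ** 2

# Hypothetical updates over time on one question that resolved "yes" (1).
forecasts = [0.4, 0.6, 0.85]
mean_score = sum(brier(p, 1) for p in forecasts) / len(forecasts)
print(round(mean_score, 3))  # 0.181
```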

Here’s another view, this one excluding my Good Judgment Project results because I don’t have percentile scores for each question.

For Good Judgment Project, not included in the chart, I “made money” (again: no actual money involved) on 17 of 23 questions, lost money on 4 and was basically even on 2.

Some of my worst scores across all of this involved the 2016 election (including primaries). One of my best involved venture capital. My impression is that, although subject matter knowledge is nice to have, time spent is the major limiting factor. Spending more time and updating forecasts more regularly pays off, even in areas where I’m coming in fairly fresh.

To close out, here is some of my writing about forecasting:

Sociology, history, and epistemology

More than 50 years ago, Quine suggested that epistemology must be “naturalized.” Here is Kwame Anthony Appiah explaining this idea in his book Thinking It Through:

To claim that a belief is justified is not just to say when it will be believed but also to say when it ought to be believed. And we don’t normally think of natural science as telling us what we ought to do. Science, surely, is about describing and explaining the world, not about what we should do?

One way to reconcile these two ideas would be to build on the central idea of reliabilism and say that what psychology can teach us is which belief-forming processes are in fact reliable. So here epistemology and psychology would go hand in hand. Epistemology would tell us that we ought to form our beliefs in ways that are reliable, while psychology examines which ways these are.

p. 74-75

This role for psychology should be familiar to anyone who’s read Thinking, Fast and Slow (cognitive biases are rampant and get in the way of accurate belief) or Superforecasting (here are some practices to overcome those limitations) or any number of similar books.

But why stop at psychology?

Belief formation is necessarily social, as I’ve pointed out in a few recent posts. In one I quoted Will Wilkinson:

If you want an unusually high-fidelity mental model of the world, the main thing isn’t probability theory or an encyclopedic knowledge of the heuristics and biases that so often make our reasoning go wrong. It’s learning who to trust. That’s really all there is to it. That’s the ballgame.

In another I quoted Naomi Oreskes:

Feminist philosophers of science, most notably Sandra Harding and Helen Longino, turned that argument on its head, suggest[ing] that objectivity could be reenvisaged as a social accomplishment, something that is collectively achieved.

In one of those posts I unwittingly used the term “social epistemology” to make my point that belief is social; that turns out to be its own philosophical niche. Per the Stanford Encyclopedia of Philosophy:

Social epistemology gets its distinctive character by standing in contrast with what might be dubbed “individual” epistemology. Epistemology in general is concerned with how people should go about the business of trying to determine what is true, or what are the facts of the matter, on selected topics. In the case of individual epistemology, the person or agent in question who seeks the truth is a single individual who undertakes the task all by himself/herself, without consulting others. By contrast social epistemology is, in the first instance, an enterprise concerned with how people can best pursue the truth (whichever truth is in question) with the help of, or in the face of, others. It is also concerned with truth acquisition by groups, or collective agents.

The entry is full of all sorts of good topics familiar to anyone who reads about behavioral science: rules for Bayesian reasoning, how to aggregate beliefs in a group, network models of how beliefs spread, when and whether deliberation leads to true belief. But it is all fairly ahistorical.
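To make the aggregation topic concrete, here’s a toy sketch of one rule that literature discusses, the linear opinion pool (a weighted average of individual probability judgments); the judgments and weights below are hypothetical:

```python
# Linear opinion pool: aggregate a group's probability judgments about
# one proposition as a weighted average.

def linear_pool(probabilities, weights):
    total = sum(weights)
    return sum(p * w for p, w in zip(probabilities, weights)) / total

# Four people judge the probability that some claim is true; weights
# might reflect track records (both sets of numbers are made up).
judgments = [0.9, 0.7, 0.6, 0.2]
weights = [2.0, 1.0, 1.0, 0.5]

print(round(linear_pool(judgments, weights), 2))  # 0.71
```

Tools like this treat the group as a set of interchangeable probability-emitters, which is part of what makes the literature feel ahistorical.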

Compare that to Charles Mills, writing about race, white supremacy, and why epistemology, once naturalized, needs both sociology and history:

[Quine’s work] had opened Pandora’s box. A naturalized epistemology had, perforce, also to be a socialized epistemology; this was ‘a straightforward extension of the naturalistic approach.’ What had originally been a specifically Marxist concept, ‘standpoint theory,’ was adopted and developed to its most sophisticated form in the work of feminist theorists, and it became possible for books with titles like Social Epistemology and Socializing Epistemology, and journals called Social Epistemology, to be published and seen as a legitimate part of philosophy. The Marxist challenge thrown down a century before could finally be taken up…

A central theme of the epistemology of the past few decades has been the discrediting of the idea of a raw perceptual ‘given’ completely unmediated by concepts… In most cases the concepts will not be neutral but oriented toward a certain understanding, embedded in sub-theories and larger theories about how things work.

In the orthodox left tradition, this set of issues is handled through the category of ‘ideology’; in more recent radical theory, through Foucault’s ‘discourses.’ But whatever one’s larger meta-theoretical sympathies, whatever approach one thinks best for investigating these ideational matters, such concerns obviously need to be part of a social epistemology. For if the society is one structured by relations of domination and subordination (as of course all societies in human history past the hunting-and-gathering stage have been) then in certain areas this conceptual apparatus is likely to be negatively shaped and inflected in various ways by the biases of the ruling group(s).

Black Rights / White Wrongs p. 60-63

Crucially, Mills characterizes this kind of bias as “ignorance” in part because it has “the virtue of signaling my theoretical sympathies with what I know will seem to many a deplorably old-fashioned ‘conservative’ realist intellectual framework, one in which truth, falsity, facts, reality, and so forth are not enclosed with ironic scare-quotes.” The history and sociology of race (like class or gender) help explain not just why people believe what they do but also why people reach incorrect beliefs.

That view is in contrast with some other sociological programs, as the Stanford entry on social epistemology notes:

A movement somewhat analogous to social epistemology was developed in the middle part of the 20th century, in which sociologists and deconstructionists set out to debunk orthodox epistemology, sometimes challenging the very possibility of truth, rationality, factuality, and/or other presumed desiderata of mainstream epistemology. Members of the “strong program” in the sociology of science, such as Bruno Latour and Steve Woolgar (1986), challenged the notions of objective truth and factuality, arguing that so-called “facts” are not discovered or revealed by science, but instead “constructed”, “constituted”, or “fabricated”. “There is no object beyond discourse,” they wrote. “The organization of discourse is the object” (1986: 73).

A similar version of postmodernism was offered by the philosopher Richard Rorty (1979). Rorty rejected the traditional conception of knowledge as “accuracy of representation” and sought to replace it with a notion of “social justification of belief”. As he expressed it, there is no such thing as a classical “objective truth”. The closest thing to (so called) truth is merely the practice of “keeping the conversation going” (1979: 377).

But as Oreskes argues in her defense of science as a social practice, the recognition that knowledge is fundamentally social doesn’t require a belief in relativism.

A naturalized epistemology requires, in Appiah’s words, a search for “belief-forming processes [that] are in fact reliable.” That requires the study of how belief formation works at the group level–including an appreciation of history and sociology. To overcome our biases we need to consider the specific society within which we are trying to find the truth, and the injustices that pervade it.

A short definition of power

From Power for All, by Julie Battilana and Tiziana Casciaro:

There are two common threads across these definitions [of power across the social sciences]. The first is that the authors view power as the ability of a person or a group of people to produce an effect on others–that is, to influence their behaviors. This influence can be exercised in different ways, which has led social scientists to distinguish between different forms of power. As summarized by the sociologist Manuel Castells, “Power is exercised by means of coercion (the monopoly on violence, legitimate or not, by the state) and/or by the construction of meaning in people’s minds through mechanisms of cultural production and distribution.” Therefore, two broad categories underpin the types of power identified in the literature. The first category encompasses persuasion-based types of power, such as expert power that stems from trusting someone’s know-how, referent power that stems from admiration for or identification with someone, or power stemming from control over cultural norms. The other category comprises coercion-based types of power that include the use of force (be it physically violent or not) and authority (or “legitimate power”) to influence people’s behaviors. Building on this large and rich body of work, we define power as the ability to influence another person or group’s behavior, be it through persuasion or coercion.

The second common thread is that they all, implicitly or explicitly, posit that power is a function of one actor’s dependence on another. Social exchange theory articulates this view clearly in the seminal model of power-dependence relations developed by sociologist Richard Emerson. In this view, power is the inverse of dependence. The power of Actor A over Actor B is the extent to which Actor B is dependent on Actor A. The dependence of Actor B on Actor A is “directly proportional to B’s motivational investment in goals mediated by A and inversely proportional to the availability of those goals to B outside of the A-B relation.” The fundamentals of power that we present in this book are derived from this conceptualization of power. They posit that the power of Actor A over Actor B depends on the extent to which A controls access over resources that B values and that, in turn, the power of Actor B over Actor A depends on the extent to which B controls access over resources that A values. It follows from the fundamentals of power that power is always relational and that it is not a zero-sum game. The power relationship between A and B may be balanced if A and B are mutually dependent and they each value the resources that the other party has access to. It is imbalanced if one of the parties needs the resources that the other party can provide more.

Importantly the resources that each of the parties value may be psychological as well as material…

Cultural norms shape what is valued in a given context, while the distribution of resources favors some people and organizations and disadvantages others…

p. 200-201 (Appendix); emphasis added.
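Emerson’s formulation is compact enough to write down. The following is my own gloss in symbols, not notation from the book: A’s power over B equals B’s dependence on A, which rises with B’s motivational investment in goals that A mediates and falls with B’s alternatives.

```latex
% My gloss on Emerson's power-dependence relation (not the book's notation):
%   P_{AB}  : power of A over B
%   D_{BA}  : dependence of B on A
%   M_B     : B's motivational investment in goals mediated by A
%   Alt_B   : availability of those goals to B outside the A-B relation
P_{AB} \;=\; D_{BA} \;\propto\; \frac{M_B}{\mathrm{Alt}_B}
```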

And strategies for shifting power:

Here’s the book. Here’s a summary from Charter. Here’s a past post of mine quoting Battilana’s work.

Governance, growth, and equity

When I studied environmental issues, I was taught three lenses through which to understand them:

  • The neo-Malthusians emphasized resource scarcity, natural limits, and scientific management. Most conservationists and environmental scientists fit this perspective.
  • The Cornucopians emphasized markets, technology, and humanity’s ability to invent its way out of shortages. Their ranks include lots of economists and Silicon Valley-types; the Simon-Ehrlich wager is a canonical example of the cornucopian viewpoint prevailing.
  • Finally, Political Ecologists emphasized environmental justice. They asked who would be harmed by pollution and by policies to limit it, and whose voices were left out of decision-making.

These three perspectives served me well, including when my job involved following climate policy closely. And lately I have been thinking about something similar to understand the governance and ethics of technology. I will call these three perspectives: Governance, growth, and equity.

  • The governance perspective emphasizes better management and regulation of existing technologies, new or old. It tries to maximize benefits and minimize harms. Lawrence Lessig’s book Code 2.0 is, to me, a definitive articulation of the governance view: Lessig took the idea of code and conceived of it as one more way to regulate. Laws, norms, and markets all regulate our lives, constraining and enabling behavior. Code was a form of “architecture”–how something is “designed” or “built”–and architecture was the fourth form of regulation. Lessig was interested in both how we would regulate behavior on the internet and how the code itself would regulate us.
  • The growth perspective emphasizes technology’s ability to raise living standards over time by solving new problems and making humans more productive. This perspective asks: Are we investing enough in new ideas and technologies and are we properly incentivizing those things given the market failures that surround them? The best articulations of this view, in my opinion, come from the economics of ideas, innovation, and new growth theory. For a comprehensive view on the benefits of technology, see Robert Gordon’s The Rise and Fall of American Growth.
  • Finally, the equity view asks who bears the harms of new technologies and whose voices are left out of building and regulating tech. It emphasizes inclusion and democracy and is skeptical of concentrated power. Meredith Whittaker’s 2021 essay “The steep cost of capture,” on the dangers of concentrating AI research in a few large, for-profit firms, captures this view. So does Sheila Jasanoff’s essay “‘Preparedness’ Won’t Stop the Next Pandemic.” They question the assumptions of the growth and governance views, and reorient the discussion to consideration of power.

The lesson of many-model thinking is that combining different mental models of a problem or phenomenon often yields better results than trying to pick the single best model. In the context of environmental issues, many people are drawn to one of the three perspectives I described, and maybe have at least some sympathy for a second. But many of them are skeptical or even dismissive of at least one, and so are at risk of missing something important about those issues.

One starting point for thinking through tech ethics is to consider the issue from all three perspectives. Take the harms of social media.

  • The governance view would emphasize laws and regulations that could constrain social media companies and management practices those companies could use to minimize harm. The governance view recommends things like creating a fiduciary duty for platforms, auditing algorithms, and enforcing antitrust law.
  • The growth view emphasizes technical fixes–like better algorithmic content moderation–that can mitigate harm. And it points out the importance of new entrants and competition to speed along those improvements.
  • The equity view questions whether companies and an industry that are not at all representative of the societies they serve can ever operate responsibly. It suggests more fundamentally decentralized, participatory platforms, owned and managed as a commons. Its interest in antitrust goes beyond lowering prices to raise a more radical critique of concentrated economic power. And it insists that marginalized communities be included in discussions of how to improve social media.

On any given issue, one of these perspectives might be stronger or weaker. I don’t find the growth view all that helpful when it comes to social media, for example–but in a discussion of handling pandemics or governing AI I believe the growth view would have a lot to add.

The trick for those thinking about the ethics of tech is to consider all three perspectives and then use judgment to strike the right balance between them. But the starting point should be considering the concerns of governance, of growth, and of equity.

“Thin” and “thick” causality

Kathryn Paige Harden’s book The Genetic Lottery: Why DNA Matters for Social Equality includes a really nice primer on causality, including a distinction between “thin” and “thick” versions of it. The book is about genetics, but that’s not my focus in this post; more about the book here and here. Here are some excerpts of her treatment of causality:

Causes and Counterfactuals

In 1748, the Scottish philosopher David Hume offered a definition of “cause” that was actually two definitions in one:

“We may define a cause to be an object, followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed.”

The first half of Hume’s definition is about regularity–if you see one thing, do you always see a certain other thing? If I flick the light switch, the lights regularly, and almost without exception, come on…

Regularity accounts of causality occupied philosophers’ attention for the next two centuries, while the second half of Hume’s definition–where if the first object had not been, the second had never existed–was relatively neglected. Only in the 1970s did the philosopher David Lewis formulate a definition of cause that more closely resembled the second half of Hume’s definition. Lewis described a cause as “Something that makes a difference, and the difference it makes must be a difference from what would have happened without it.”

Lewis’s definition of a cause is all about the counterfactual–X happened, but what if X had not happened?…

[Saying that X causes Y] does not imply that researchers know the mechanism for how this works…

Each of these mechanistic stories could be decomposed into a set of sub-mechanisms, a matryoshka doll of “How?”…

But understanding mechanism is a separable set of scientific activities from those activities that establish causation…

p. 99-104

She goes on to describe a concept of “portability” that then ties into the problem of generalizability:

The portability of a cause can be limited or unknown… The developmental psychologist Urie Bronfenbrenner referred to the “bioecological” context of people’s lives. Everyone is embedded in concentric circles of context… I find Bronfenbrenner’s bioecological model to be a helpful framework for thinking about the portability of causes of human behavior: Which of these circles would have to change, and by how much, in order for the causal claim to no longer be true? Here, knowing about the mechanism also helps knowing about portability, as a good understanding of mechanism allows one to predict how cause-effect relationships will play out even in conditions that have never been observed.

p. 106-107

Finally she distinguishes between “thin” and “thick” causal explanations:

In the course of ordinary social science and medicine, we are quite comfortable calling something a cause, even when (a) we don’t understand the mechanisms by which the cause exerts its effects, (b) the cause is probabilistically but not deterministically associated with effects, and (c) the cause is of uncertain portability across time and space. “All” that is required to assert that you have identified a cause is to demonstrate evidence that the average outcome for a group of people would have been different if they had experienced X instead of Not-X…

I’m going to call this the “thin” model of causation.

We can contrast the “thin” model of causation with the type of “thick” causation we see in monogenic genetic disorders or chromosomal abnormalities. Take Down’s syndrome, for instance. Down’s syndrome is defined by a single, deterministic, portable cause… And this causal relationship operates as a “law of nature,” in the sense that we expect the trisomy-Down’s relationship to operate more or less in the same way, regardless of the social milieu into which an individual is born.

p. 108
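Harden’s “thin” criterion is easy to render as a toy simulation: generate outcomes under X and Not-X and compare group averages. Everything below, including the data-generating process, is made up:

```python
# "Thin" causation as a simulation: X counts as a cause if the average
# outcome differs between the X and Not-X groups.
import random

random.seed(0)

def outcome(treated: bool) -> int:
    # X shifts the probability of the outcome; it doesn't determine it.
    p = 0.3 + (0.2 if treated else 0.0)
    return 1 if random.random() < p else 0

n = 100_000
treated = sum(outcome(True) for _ in range(n)) / n
control = sum(outcome(False) for _ in range(n)) / n
print(round(treated - control, 3))  # about 0.2
```

Note how little the sketch needs: no mechanism, no determinism, and nothing about whether that 0.2 travels to other settings, which is exactly the portability question.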

Prediction, preparation, and humility

Sheila Jasanoff of Harvard has a really interesting essay in Boston Review titled “‘Preparedness’ Won’t Stop the Next Pandemic.” The whole thing is worth a read, but here’s the gist:

Humility, by contrast, admits that defeat is possible. It occupies the nebulous zone between preparedness and precaution by asking a moral question: not what we can achieve with what we have, but how we should act given that we cannot know the full consequences of our actions. Thought of in this way, humility addresses the questions perennially raised by critics of precaution and refutes the charges of passivity. Confronted on many fronts by riddles too knotty to solve, must society choose either to do nothing or to move aggressively forward as if risks don’t matter and resources are limitless? Decades of effort to protect human health and the environment suggest that the choice is not so stark or binary.

There is a middle way, the way of humility, that permits steps to be taken here and now in order to forestall worst-case scenarios later. It implements precaution by unheroic but also more ethical means, through what I call technologies of humility: institutional mechanisms—including greater citizen participation—for incorporating memory, experience, and concerns for justice into our schemes of governance and public policy. This is a proactive, historically informed, and analytically robust method that asks not just what we can do but who might get hurt, what happened when we tried before, whose perceptions were systematically ignored, and what protections are in place if we again guess wrong.

There are some responses to the essay here, which I’ve not yet read.

Notes on science

I’ve been reading and writing about the philosophy of science a bunch in the last couple of years, so this post is a place to clip together a number of quotes and posts in one place.

Michael Strevens says the scientific method boils down to the “iron rule of explanation” that “only empirical evidence counts.”

This is a very stripped down idea. It allows for subjectivity, and it grants that there is no logically or philosophically satisfying way to decide how to interpret the results of observation or experimentation.

Here, then, in short, is the iron rule:

1. Strive to settle all arguments by empirical testing.

2. To conduct an empirical test to decide between a pair of hypotheses, perform an experiment or measurement, one of whose possible outcomes can be explained by one hypothesis (and accompanying cohort) but not the other…

How can a rule so scant in content and so limited in scope account for science’s powers of discovery? It may dictate what gets called evidence, but it makes no attempt to forge agreement among scientists as to what the evidence says. It simply lays down the rule that all arguments must be carried out with reference to empirical evidence and then steps back, relinquishing control. Scientists are free to think almost anything they like about the connection between evidence and theory. But if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with.

My posts on it are here and here. Here’s the book and here is an essay version in Aeon.

Naomi Oreskes says science must be understood as social practices–and that this is a reason to trust it, not dismiss it

There is now broad agreement among historians, philosophers, sociologists, and anthropologists of science that there is no (singular) scientific method, and that scientific practice consists of communities of people, making decisions for reasons that are both empirical and social, using diverse methods. But this leaves us with the question: If scientists are just people doing work, like plumbers or nurses or electricians, and if our scientific theories are fallible and subject to change, then what is the basis for trust in science?

I suggest that our answer should be two-fold: 1) its sustained engagement with the world and 2) its social character

My post is here and the book is here.

Four commonalities in scientific practice

From UPenn’s short Coursera course on the philosophy of science, which is a nice overview:

Science is not completely unified, and there is no master method or recipe that’s appropriate in all contexts. Nevertheless, there are certain elements common to these examples… So what are the commonalities? There are at least four major ones. First, all four of our examples involve sophisticated forms of observation… Second, simple observation wasn’t enough… [experimentation and simulation were used as well.] Third, in each case it was multiple lines of evidence, generated using different experimental and observational techniques, that convinced the scientific community of the relevant results. Simplistic pictures of science, such as those taught in high school, make it seem like scientific research miraculously uncovers the truth by simply verifying one hypothesis with a single experiment. While this does happen occasionally, research more often looks like the cases I’ve talked about: research done by multiple people using different approaches that point in the same direction. Or they don’t, as in the case of children’s beliefs. Philosophers call this robustness or consilience. Fourth and finally, all of our examples involve the accumulation of evidence over time. Each case involves scientific understanding that improves over time, from an initial sense that the answer is at hand to greater accuracy and precision in measurements and a much greater appreciation of what is genuinely needed to explain a phenomenon. Scientists never achieve certainty; that is reserved for logic and mathematics. The accumulation of evidence, especially from multiple independent sources, is the key to increasing confidence that a hypothesis is true.

Tim Lewens defends scientific realism

Scientific realism is the label for the philosophical view that science is in the truth business. Scientific realism says that the sciences represent those parts of the world they deal with in an increasingly accurate way as time goes by. Scientific realists are not committed to the greedy idea that the sciences can tell us all there is to know about everything; they can happily acknowledge that there is plenty to learn from the arts and humanities. Moreover, by denying that science gives us a perfectly accurate picture of the world, scientific realists are not committed to the manifestly absurd idea that science is finished.

A moment’s reflection suggests that scientific realism is not the only sensible and respectful way to respond to the successes of science. Perhaps we should think of scientific theories in the way we think of hammers, or computers: they are remarkably useful, but like hammers or computers they are mere tools. It makes no sense to ask whether a hammer is true, or whether it accurately represents the world, and one might argue that the same goes for science: we should simply ask whether its theories are fit for their purposes…

Cutting to the chase, this chapter will argue in favor of scientific realism… First, we need to fend off… the argument from “underdetermination”… [which] suggests that scientific evidence is never powerful enough to discriminate between wholly different theories about the underlying nature of the universe… Second, we need to ask whether there is any positive argument in favor of scientific realism. More or less the only argument that has ever been offered to support this view is known as the “No Miracles argument.” The basic gist of this argument is that if science were not true–if it made significant mistakes about the constituents of matter, for example–then when we acted on the basis of scientific theory, our plans would consistently go awry… Third, and finally, we must confront an argument known as the “Pessimistic Induction.” This argument draws on the historical record to suggest that theories we now think of as false have nonetheless been responsible for remarkable practical successes.

Why Trust Science p. 85-88

The book is more a quick tour through the philosophy of science, and Lewens’ argument for realism was something of a detour.

Rorty says science is a tool and urges not to think of it purely with examples from physics

In [McDowell’s] picture, people like Quine (and sometimes even Sellars) are so impressed with natural science that they think that the first sort of intelligibility [associated with natural science rather than reason] is the only genuine sort.

I think it is important, when discussing the achievements of the scientific revolution, to make a distinction which McDowell does not make: a distinction between particle physics, together with those microstructural parts of natural science which can easily be linked up with particle physics, and all the rest of natural science. Particle physics, unfortunately, fascinates many contemporary philosophers, just as corpuscularian mechanics fascinated John Locke…

To guard against this simpleminded and reductionistic way of thinking of non-human nature, it is useful to remember that the form of intelligibility shared by Newton’s primitive corpuscularianism and contemporary particle physics has no counterpart in, for example, the geology of plate tectonics, nor in Darwin’s or Mendel’s accounts of heredity and evolution. What we get in those areas of natural science are narratives, natural histories, rather than the subsumptions of events under laws.

So I think that McDowell should not accept the bald naturalists’ view that there is a “distinctive form of intelligibility” found in the natural sciences and that it consists in relating events by laws. It would be better to say that what Davidson calls “strict laws” are the exception in natural science–nice if you can get them, but hardly essential to scientific explanation. It would be better to treat “natural science” as a name of an assortment of useful gimmicks…

I think we would do better to rid ourselves of the notion of “intelligibility” altogether. We should substitute the notion of techniques of problem-solving. Democritus, Newton, and Dalton solved problems with particles and laws. Darwin, Gibbon, and Hegel solved others with narratives. Carpenters solve others with hammers and nails, and soldiers still others with guns.

Pragmatism as anti-authoritarianism, p. 182-184

And elsewhere:

Scientific progress is a matter of integrating more and more data into a coherent web of belief–data from microscopes and telescopes with data obtained by the naked eye, data forced into the open by experiment with data which has always been lying about.

Pragmatism as anti-authoritarianism p. 136

Rorty is looking to center epistemology on people. And of course his earlier work rejects the idea that true belief is about correctly mirroring an external world. So how should we think about what seems like an external world?

The only other sense of “social construction” that I can think of is the one I referred to earlier: the sense in which bank accounts are social constructions but giraffes are not. Here the criterion is simply causal. The causal factors which produced giraffes did not include human societies, but those which produced bank accounts did.

Pragmatism as anti-authoritarianism, p. 140

David Weinberger says the success of machine learning models (MLMs) challenges Western ideas about scientific laws

Our encounter with MLMs doesn’t deny that there are generalisations, laws or principles. It denies that they are sufficient for understanding what happens in a universe as complex as ours. The contingent particulars, each affecting all others, overwhelm the explanatory power of the rules and would do so even if we knew all the rules. For example, if you know the laws governing gravitational attraction and air resistance, and if you know the mass of a coin and of Earth, and if you know the height from which the coin will be dropped, you can calculate how long it will take the coin to hit the ground. That will likely be enough to meet your pragmatic purpose. But the traditional Western framing of it has overemphasised the calm power of the laws. To apply the rules fully, we would have to know every factor that has an effect on the fall, including which pigeons are going to stir up the airflow around the tumbling coin and the gravitational pull of distant stars tugging at it from all directions simultaneously. (Did you remember to include the distant comet?) To apply the laws with complete accuracy, we would have to have Laplace’s demon’s comprehensive and impossible knowledge of the Universe.

That’s not a criticism of the pursuit of scientific laws, nor of the practice of science, which is usually empirical and sufficiently accurate for our needs­­­ – even if the degree of pragmatic accuracy possible silently shapes what we accept as our needs. But it should make us wonder why we in the West have treated the chaotic flow of the river we can’t step into twice as mere appearance, beneath which are the real and eternal principles of order that explain that flow. Why our ontological preference for the eternally unchanging over the eternally swirling water and dust?

Here is the Aeon essay.
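The idealized calculation Weinberger alludes to is one line of physics; the height below is an arbitrary example:

```python
# Fall time of a dropped coin from h = (1/2) g t^2, ignoring air
# resistance, pigeons, and the gravitational pull of distant comets.
import math

g = 9.81   # m/s^2, gravitational acceleration at Earth's surface
h = 10.0   # metres, arbitrary example height

print(round(math.sqrt(2 * h / g), 2), "s")  # 1.43 s
```

His point is that the law looks this clean precisely because it throws away the contingent particulars.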

One I read but left out was Steven Pinker’s Rationality, which I won’t try to sum up here, in part because it’s not about science per se.

I guess having clipped all that together I’ll end with some posts I’ve done in the past few years on or related to epistemology:

Objectivity as a social accomplishment

Here is an excellent characterization of scientific objectivity as a social practice, from Naomi Oreskes in her book Why Trust Science:

Sociologists of scientific knowledge stressed that science is a social activity, and this has been taken by many (for both better and worse) as undermining its claims to objectivity. The “social,” particularly to many scientists but also many philosophers, was synonymous with the personal, the subjective, the irrational, the arbitrary, and even the coerced. If the conclusions of scientists–who for the most part were European or North American men–were social constructions, then they had no more or less purchase on truth [than] the conclusions of other social groups. At least, a good deal of work in science studies seemed to imply that. But feminist philosophers of science, most notably Sandra Harding and Helen Longino, turned that argument on its head, suggesting that objectivity could be reenvisaged as a social accomplishment, something that is collectively achieved…

The greater the diversity and openness of a community and the stronger its protocols for supporting free and open debate, the greater the degree of objectivity it may be able to achieve as individual biases and background assumptions are “outed,” as it were, by the community. Put another way: objectivity is likely to be maximized when there are recognized and robust avenues for criticism, such as peer review, when the community is open, non-defensive, and responsive to criticism, and when the community is sufficiently diverse that a broad range of views can be developed, heard, and appropriately considered…

To recapitulate: There is now broad agreement among historians, philosophers, sociologists, and anthropologists of science that there is no (singular) scientific method, and that scientific practice consists of communities of people, making decisions for reasons that are both empirical and social, using diverse methods. But this leaves us with the question: If scientists are just people doing work, like plumbers or nurses or electricians, and if our scientific theories are fallible and subject to change, then what is the basis for trust in science?

I suggest that our answer should be two-fold: 1) its sustained engagement with the world and 2) its social character….

This [first] consideration–that scientists are in our society the experts who study the world–is a reminder to scientists of the importance of foregrounding the empirical character of their work–their engagement with nature and society and the empirical basis for their conclusions…

However, reliance on empirical evidence alone is insufficient for understanding the basis of scientific conclusions and therefore insufficient for establishing trust in science. We must also take to heart–and explain–the social character of science and the role it plays in vetting claims.

Why Trust Science? Naomi Oreskes, p. 50-57

The book’s initial essay, from which this is drawn, is not only interesting in its own right but is a really concise overview of the philosophy of science and its twists and turns over time.

Better markets, but more or less?

Luigi Zingales has a good op-ed in Project Syndicate that summarizes a case he’s been making for years:

But this opposition of state and market is misleading, and it poses a major obstacle to understanding and addressing today’s policy challenges. The dichotomy emerged in the nineteenth century, when arcane government rules, rooted in a feudal past, were the main obstacle to the creation of competitive markets. The battle cry of this quite legitimate struggle was later raised to the principle of laissez-faire, ignoring the fact that markets are themselves institutions whose efficient functioning depends on rules. The question is not whether there should be rules, but rather who should set them, and in whose interest… In sum, we should strive to achieve a better state and better markets, and to contain each within its respective spheres.

Luigi has done more than anyone in the past decade to clarify that being “pro-market” or “pro-competition” doesn’t mean being laissez-faire and that it isn’t the same as being “pro-business.” And while that view began, in my estimation, as a pragmatic center-right idea (keep the appreciation of markets, lose the coziness with business), it won over some major adherents on the left. Most notably, Elizabeth Warren framed her progressive economic policy as pro-competition, and claimed she was a “capitalist to my bones.”

How might we think about the difference between Zingales and Warren on these issues? Certainly one might dive into specific policy areas and look for disagreements. But I’ve come to think of them as agreeing on the idea of better markets but parting ways over how much markets should structure the economy.

Although Zingales notes plenty of room for government to play an important role (see the op-ed for more), I think of him as wanting better markets and more markets. If the rules surrounding markets were written to be more pro-competitive, then markets would be able to take on even more tasks than they already do. I’m not certain this is what he thinks, but this is how I read his general perspective.

Warren, by contrast, I think basically wants better markets and less market control. She’d increase the government’s role not only as rule-setter but as provider of various goods, while simultaneously trying to make markets work better within a more limited sphere.

Of course nearly everyone would say, all else equal, that they want competitive, less corrupt markets over monopolistic ones (unless they personally benefit from the monopoly). But it’s telling that some camps choose to prioritize this idea and others don’t. If these two dimensions are real, we can structure debate over the role of markets and business like this:

                      Better markets   Status quo
More market control   Zingales         “Pro-business”
Less market control   Warren           Socialist

I outline all this because I think the left column contains a fascinating disagreement. If we could overcome some political-economy issues to get better, more competitive markets, what new uses might we put them to? Might we decide that more spheres work well under the control of regulated markets with the right rules? Or, having made that progress on political economy issues, might we find ourselves better able to write good rules to effectively use non-market institutions for things we currently leave to markets? Might we end up relying more on universities or open source communities or direct government provision?

The central challenge here, no matter where you land on these questions, is how to make progress on the political economy issues that limit competition. But if we could write the kind of rules we need to make markets truly competitive, would we use them more? Or less?