Mergers, social science, and unmeasured interactions

From Tyler Cowen’s conversation with Matt Levine:

COWEN: If we think about mergers and acquisitions, one of the standard results in the empirical finance literature is that acquiring firms do fairly poorly. That is, acquisitions don’t seem to pay off. Yet, of course, acquisitions persist.

You’ve done M&A work in your life. How do you think about this process? If it doesn’t pay off, is it about empire building? Is it about winner’s curse?

Do you somehow not trust the data? You would challenge the interpretation of the result? Or how good are acquisitions for the acquiring firm? And what goes wrong?

LEVINE: I wouldn’t challenge the data. It’s a similar story to active management in some ways. The fact that M&A is bad doesn’t mean that your merger will be bad, right?

COWEN: [laughs]

This in fact directly relates to something I’ve been discussing this week: that even in a completely randomized social science experiment, there are likely to be unmeasured variables that interact with the thing you’re trying to measure. So, while you can be confident in the average or net effect of the causal treatment, it may not apply — even directionally — to a given individual case.

So, you can take Levine to be making a cynical point about our ability to delude ourselves. (Like when he says “People want to do stuff.”) Or, you can take him to be making a point that average effects are just that. That’s how I read him when he says:

The data is not overwhelming that all mergers are bad. The data is like, on average, they’re a little bad. So you say, “Here are the reasons why we are better.” Everyone can say that, and 49 percent of them will be right.

The point is that you could run a randomized experiment with a control in which you get one group of companies to go through with a merger, and another group not to. And even if your randomization worked, and both groups were actually similar across every possible dimension of interest (itself unlikely), there still might be causally important unmeasured variables. Suppose the entire causal model were: mergers make companies worse off, except when the acquirer’s CEO was previously an M&A lawyer, in which case they make the acquirer better off. Assume that the study does not capture acquiring CEO background at this level of detail, and that the majority of acquisitions are by companies whose CEO was not previously an M&A lawyer.

In that case, the interaction between CEO background and mergers will go unnoticed. The main effect will still be valuable — especially for policymakers and others whose business is mostly about average and net effects — but for an individual CEO who is familiar with the data and is weighing an acquisition, the question remains: what unmeasured interaction variables might there be that could apply to me?
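The arithmetic behind this scenario is worth seeing. The following sketch uses invented numbers (the 10 percent share of ex-M&A-lawyer CEOs and the effect sizes are assumptions, not data from any study) to show how a positive subgroup effect can hide inside a mildly negative average:

```python
# Hypothetical causal model from the text, with invented numbers:
# mergers destroy value, EXCEPT when the acquiring CEO is a former
# M&A lawyer. The study only records the average.
p_lawyer = 0.10        # assumed share of acquirers led by ex-M&A lawyers
effect_lawyer = +5.0   # assumed value change (%) for that subgroup
effect_other = -2.0    # assumed value change (%) for everyone else

# What the study measures: the average treatment effect.
avg_effect = p_lawyer * effect_lawyer + (1 - p_lawyer) * effect_other
print(f"measured average effect: {avg_effect:+.2f}%")   # -1.30%: "a little bad"
print(f"effect for ex-lawyer CEOs: {effect_lawyer:+.2f}%")  # but +5% for them
```

The study faithfully reports that mergers are, on average, a little bad, and that report is simultaneously the wrong directional advice for the unmeasured subgroup.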

More formally:

Confounding by unmeasured Patient Variable × Treatment Variable interactions remains a possibility.

So, what then compels an individual to accept a social science finding in the context of their own decision? Even if they’re convinced that the main causal result is true on average, what’s to keep them from coming up with some plausible unmeasured interaction that applies to them and renders the result inapplicable?

A good answer comes from Steven Pinker:

In 1954, Paul Meehl stunned his fellow psychologists by showing that simple actuarial formulas outperform expert judgment in predicting psychiatric classifications, suicide attempts, school and job performance, lies, crime, medical diagnoses, and pretty much any other outcome in which accuracy can be judged at all. His conclusion about the superiority of statistical to intuitive judgment is now recognized as one of the most robust findings in the history of psychology.

Data, of course, cannot solve problems by themselves. All the money in the world could not pay for randomized controlled trials to settle every question that occurs to us. Human beings will always be in the loop to decide which data to gather and how to analyze and interpret them. The first attempts to quantify a concept are always crude, and even the best ones allow probabilistic rather than perfect understanding. Nonetheless, social scientists have laid out criteria for evaluating and improving measurements, and the critical comparison is not whether a measure is perfect but whether it is better than the judgment of an expert, critic, interviewer, clinician, judge, or maven. That turns out to be a low bar.

The reason not to search for unmeasured interactions that might render a social scientific result inapplicable is simply that we’re not very good at it. Usually, betting the average effect will beat your intuition, because intuition is colored by motivated reasoning.

To return to M&A, the two parts of Levine’s answer are related. On the one hand, the fact that mergers are, on average, value-destroying does not necessarily mean all of them are. On the other hand, clearly one big reason lots of mergers get done is executives’ desire to do something or to build an empire. The latter is the reason it’s usually wise to ignore the former.

Another answer, though, is that this is what good judgment is all about — knowing when to bet the average and when not to. In this view, Tetlock’s forecasters know that they should usually bet the average when making predictions, but their key skill is judiciously searching for exceptions. (This sort of parallels one argument for human + algorithm teams, in which the human occasionally adds information the algorithm doesn’t have. Of course, in practice it doesn’t necessarily work so well.)

So, if a CEO proposes a merger, how do you know if they’re an unthinking anti-science empire-builder, or a Tetlockian fox? I’m not sure I have a perfect answer. But I’d say the fox begins with data, and assumes the base rate, or in this case the average effect, as the starting point for conversation. Much of the time, the fox ends there. But, across many decisions, the fox sometimes seeks to improve upon the base rate, by adding information that the algorithm (study) didn’t include, even if the causal implications of that information are uncertain or based on experience or intuition.

We can say, based on the data, that most CEOs who take on mergers are probably biased empire builders who’d have been well-advised to bet the data. But some of them are foxes, and they know something the social scientists don’t.
