Tech companies, capabilities, and rents

Tyler Cowen asks: could the tech companies run everything better?

Under one view, the major tech companies lucked into some pieces of rapidly scalable software…

Under the second view, the major tech companies have developed new managerial technologies for hiring, handling, and motivating super-smart employees…

Yet a third view starts with the idea of labor scarcity, at least for the very talented folks…

(There’s a bit more in the full post.)

A similar question is: What is the source of tech firms’ rents? Firms can earn above-market profits for many different reasons. Are their rents the result of economies of scale (consistent with Cowen #1)? Are their rents the result of superior capabilities (Cowen #2)? #3 is a bit trickier to articulate without mentioning #1 or #2, but it remains worth having in the mix.

The answer seems to me to matter quite a bit, in terms of how worried we should be about the tech firms’ market power — at least in purely economic terms.

Consider these quotes:

“The power of new tech giants to use their potent networks and the vast amounts of data they collect to thwart competition is one of the biggest challenges facing antitrust authorities today.” David Wessel, Brookings

“Some firms clearly ‘get it’ and others don’t.” Chiara Criscuolo, OECD

David’s quote articulates a view of tech rents in line with #1. Chiara’s can be interpreted as closer to #2. I have yet to see anyone demonstrate empirically which is more accurate.

McKinsey Global Institute on industry concentration

There’s a lot in their new report on productivity growth that’s relevant here, but here’s their bottom line:

In our sector deep dives, we find evidence of increased business concentration and consolidation but do not find that rising concentration has contributed to the decline in productivity growth. We continue to find evidence of strong competitive pressure and innovation across industries. However, we caution that this may not be the case in the future if changes in industry structure reduce competitive intensity as well as incentives to innovate and improve operational performance.

A concise introduction to organizations

I recently wrote a series of posts laying out some basics on how I think about organizations. I’ve collected links and summaries here.

How to think about organizations

According to sociologist Richard Scott, “Organizations are groups whose members coordinate their behavior in order to accomplish shared goals or to put out a product.”

Loosely, I think about organizations through three lenses: the market-based view, the managerial view, and the sociological view.

How are organizations organized?

A quick tour of the difference between functional and divisional structures, plus matrix structures, the employee-centric model, and the crowd-centric model.

What makes an organization succeed?

You need to offer something people want; sell it for more than it costs to provide; and have some reason you can’t be copied. Plus: a very concise description of good management.

What are organizations for?

It’s a mistake to answer this question by jumping into the debate over shareholder capitalism. The broader purpose of organizations is to organize resources in a way that helps achieve some social goal. Profits are supposed to be an incentive to do that.

What are organizations for?

The historian Ian Morris defines the social development of a society as “the bundle of technological, subsistence, organizational, and cultural accomplishments through which people feed, clothe, house, and reproduce themselves, explain the world around them, resolve disputes within their communities, extend their power at the expense of other communities, and defend themselves against others’ attempts to extend power.” One of the key components of social development, he continues, is organization (singular):

To be able to deploy energy for food, clothing, housing, reproduction, defense, and aggression, humans have to be able to organize it. Just as organisms break down without energy, societies break down without organization.

Morris measures organization by city size, but I mention it here because in thinking about organizations (plural) it’s worth starting with their ultimate purpose: to deploy society’s resources in useful ways that achieve the society’s goals.

It’s easy, when talking about the purpose of organizations, to slide directly into the shareholder primacy debate. (Here are a few links on that.) But this broader purpose is upstream of any particular corporate governance regime. The ultimate defense of organizations should make reference to the needs of society in general, however we decide to structure their obligations to particular groups of stakeholders.

This is why corporate mission statements actually are important. They might not always be accurate or specific, but asking for one is a way of posing the basic question of justification. What is the purpose of your organization? What socially useful goal have you set for yourself?

To get a bit more specific, as I wrote in a piece about shareholder value:

The right way to think about companies’ job in the economy [is] to create real economic value, not just paper value, and not just to transfer value from one group to another. The main way to create value is through innovation.

And from another piece:

Profits are supposed to be an incentive to create valuable products and new innovations, not a reward for lobbying regulators or being the first company to scale in a particular industry.

Luigi Zingales had a good quote in a paper about this:

Most firms are actively engaged in protecting their source of competitive advantage: through a mixture of innovation, lobbying, or both. As long as most of the effort is along the first dimension, there is little to be worried about. The fear of being overtaken pushes firms to innovate. What is more problematic is when a lot of effort is put into lobbying. In other words, the problem here is not temporary market power. The expectation of some temporary market power based on innovation is the driver of much innovation and progress.

When we talk about “creating value” in this context, it’s not just about financial value; it’s really shorthand for organizing resources in a useful way to achieve some social goal. An organization’s mission is supposed to be the ambition; profits are supposed to be the incentive; and, at least in a competitive market, innovation is the way you get it done.

Good non-technical resources for understanding machine learning

This will be a living post with links to resources I think are useful (or, in some cases, that I simply want to remember to look at) for non-ML pros who want to understand machine learning and how it will change [work/society/etc]. Full disclosure: I worked on much of the HBR content. And while I vouch for many of the links below, as I said, some I haven’t yet read.

Overviews of machine learning or AI

An Introduction to Statistical Learning (textbook)

Machine learning 101.

Andrew Ng: What Artificial Intelligence Can and Can’t Do Right Now

A visual introduction to machine learning

An NBER introduction to AI.

McKinsey’s introduction.

O’Reilly’s.

What Everyone Needs to Know About AI (book)

Andreessen Horowitz’s primer on AI and AI playbook

This MIT Technology Review article contains a fantastic plain-English description of deep learning.

This slightly violates the non-technical rule, but there are many great MOOCs, several of them on Coursera: Andrew Ng’s classic ML course, his deep learning courses, and many more.

Intro to neural networks in 20 minutes 

The economics of machine learning

The single best starting point (and in book form)

Here is an NBER conference on the topic.

Big picture, trends, etc.

This CB Insights report is good on the VC-backed ML ecosystem.

Shivon Zilis’s various mappings are great comprehensive looks at the companies involved.

Benedict Evans on the next 10 years.

The New York Times on the “Great AI Awakening”

Erik Brynjolfsson and Andrew McAfee with the big picture

For managers

How to spot a machine learning opportunity even if you aren’t a data scientist

What every manager should know about machine learning

How to tell if machine learning can solve your business problem

Visualizing algorithms

StitchFix

HBR

Slightly different, but a good short tutorial on predicting who died on the Titanic

Other lists of links

from HBR ($)

The best ML resources.

On bias and social impact.

Events, newsletters, etc.

Data Elixir newsletter

Tech Review’s The Algorithm newsletter

Conference: Machine Learning and the Market for Machine Intelligence

Mergers, social science, and unmeasured interactions

From Tyler Cowen’s conversation with Matt Levine:

COWEN: If we think about mergers and acquisitions, one of the standard results in the empirical finance literature is that acquiring firms do fairly poorly. That is, acquisitions don’t seem to pay off. Yet, of course, acquisitions persist.

You’ve done M&A work in your life. How do you think about this process? If it doesn’t pay off, is it about empire building? Is it about winner’s curse?

Do you somehow not trust the data? You would challenge the interpretation of the result? Or how good are acquisitions for the acquiring firm? And what goes wrong?

LEVINE: I wouldn’t challenge the data. It’s a similar story to active management in some ways. The fact that M&A is bad doesn’t mean that your merger will be bad, right?

COWEN: [laughs]

This in fact directly relates to something I’ve been discussing this week: that even in a completely randomized social science experiment, there are likely to be unmeasured variables that interact with the thing you’re trying to measure. So, while you can be confident in the average or net effect of the causal treatment, it may not apply — even directionally — to a given individual case.

So, you can take Levine to be making a cynical point about our ability to delude ourselves. (Like when he says “People want to do stuff.”) Or, you can take him to be making a point that average effects are just that. That’s how I read him when he says:

The data is not overwhelming that all mergers are bad. The data is like, on average, they’re a little bad. So you say, “Here are the reasons why we are better.” Everyone can say that, and 49 percent of them will be right.

The point is that you could run a randomized experiment with a control, in which you get one group of companies to go through with a merger and another group not to. Even if your randomization worked, and both groups were actually similar across every possible dimension of interest (itself unlikely), there still might be causally important unmeasured variables. So, what if the entire causal model were: mergers make companies worse off, except when the acquirer’s CEO was previously an M&A lawyer, in which case the merger makes the acquirer better off. Assume that the study does not capture acquiring CEOs’ backgrounds at this level of detail, and that the majority of acquisitions are by companies whose CEO was not previously an M&A lawyer.

In that case, the interaction between CEO background and mergers will go unnoticed. The main effect will still be valuable, especially for policymakers and others whose business is mostly about average and net effects. But for an individual CEO who knows the data and is weighing an acquisition, the question remains: what unmeasured interaction variables might there be that apply to me?
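To make the arithmetic concrete, here’s a minimal simulation sketch of that hypothetical causal model (the effect sizes and the 20 percent share of ex-M&A-lawyer CEOs are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical, unmeasured moderator: was the acquiring CEO previously
# an M&A lawyer? The 20% share is an invented number for illustration.
ceo_was_ma_lawyer = rng.random(n) < 0.20

# Randomly assigned "treatment": does the company do a merger?
merged = rng.random(n) < 0.50

# Invented causal model: a merger destroys 5 units of value,
# except under ex-M&A-lawyer CEOs, for whom it creates 5 units.
true_effect = np.where(ceo_was_ma_lawyer, 5.0, -5.0)
outcome = 100 + merged * true_effect + rng.normal(0, 10, size=n)

# What the study reports (CEO background unmeasured): the average effect.
ate = outcome[merged].mean() - outcome[~merged].mean()
print(f"estimated average effect of merging: {ate:+.2f}")  # about -3

# What it misses: the subgroup for whom mergers pay off.
mask = ceo_was_ma_lawyer
sub = outcome[merged & mask].mean() - outcome[~merged & mask].mean()
print(f"effect under ex-M&A-lawyer CEOs:     {sub:+.2f}")  # about +5
```

The study’s number is right as an average; it just has nothing to say about the minority of cases where the sign flips, because the moderating variable was never recorded.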

More formally:

Confounding by unmeasured Patient Variable × Treatment Variable interactions remains a possibility.
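In regression terms (the notation here is mine, not the quoted source’s), the worry is a data-generating process along these lines, with X never observed:

```latex
Y_i = \beta_0 + \beta_1 T_i + \beta_2 X_i + \beta_3 (T_i \times X_i) + \varepsilon_i
```

Under randomization, comparing treated to untreated units recovers β1 + β3·E[X], the treatment effect averaged over the distribution of X. If β3 is large and your own X is far from the average, the published estimate can mislead about your case, even directionally, despite a flawless experiment.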

So, what then compels an individual to accept a social science finding in the context of their own decision? Even if they’re convinced that the main causal result is true on average, what’s to keep them from coming up with some plausible unmeasured interaction that applies to them and renders the result inapplicable?

A good answer comes from Steven Pinker:

In 1954, Paul Meehl stunned his fellow psychologists by showing that simple actuarial formulas outperform expert judgment in predicting psychiatric classifications, suicide attempts, school and job performance, lies, crime, medical diagnoses, and pretty much any other outcome in which accuracy can be judged at all. His conclusion about the superiority of statistical to intuitive judgment is now recognized as one of the most robust findings in the history of psychology.

Data, of course, cannot solve problems by themselves. All the money in the world could not pay for randomized controlled trials to settle every question that occurs to us. Human beings will always be in the loop to decide which data to gather and how to analyze and interpret them. The first attempts to quantify a concept are always crude, and even the best ones allow probabilistic rather than perfect understanding. Nonetheless, social scientists have laid out criteria for evaluating and improving measurements, and the critical comparison is not whether a measure is perfect but whether it is better than the judgment of an expert, critic, interviewer, clinician, judge, or maven. That turns out to be a low bar.

The reason not to search for unmeasured interactions that might render a social scientific result inapplicable is simply that we’re not very good at it. Usually, betting the average effect will beat your intuition, because intuition is colored by motivated reasoning.

To return to M&A, the two parts of Levine’s answer are related. On the one hand, the fact that mergers are, on average, value-destroying does not necessarily mean all of them are. On the other hand, clearly one big reason lots of mergers get done is executives’ desire to do something or to build an empire. The latter (that bias toward action) is why it’s usually wise to ignore the former (the possibility that your merger is the exception).

Another answer, though, is that this is what good judgment is all about: knowing when to bet the average and when not to. In this view, Tetlock’s superforecasters know that they should usually bet the average when making predictions, but their key skill is judiciously searching for exceptions. (This sort of parallels one argument for human + algorithm teams, in which the human occasionally adds information the algorithm doesn’t have. Of course, in practice it doesn’t necessarily work so well.)

So, if a CEO proposes a merger, how do you know whether they’re an unthinking, anti-science empire-builder or a Tetlockian fox? I’m not sure I have a perfect answer. But I’d say the fox begins with the data and assumes the base rate, or in this case the average effect, as the starting point for conversation. Much of the time, the fox ends there. But, across many decisions, the fox sometimes seeks to improve on the base rate by adding information that the algorithm (the study) didn’t include, even if the causal implications of that information are uncertain or based on experience or intuition.

We can say, based on the data, that most CEOs who take on mergers are probably biased empire builders who’d have been well-advised to bet the data. But some of them are foxes, and they know something the social scientists don’t.