Two stories about competition and market power

Here’s an interesting interview with Elizabeth Warren:

Foer: There are all these hints of Louis Brandeis in what you do. Brandeis had a vision of how the economy could be structured differently when the rules that he wanted were applied. He favored the small shopkeeper. In your vision, who gets favored? Are there forces in the market that you feel like are being unfairly shackled that you want to see unleashed?

Warren: Yes. Perfect. Competition. I love competition. I want to see every start-up business, everybody who’s got a good idea, have a chance to get in the market and try. This is what’s so interesting to me. There are so many people right now who argue against these reforms and other reforms, who claim they are pro-business. They’re not. They’re pro-monopoly. They’re pro–concentration of power, which crushes competition.

This is where the political and the economic interact. Once a corporation climbs up the ladder so that it’s got hundreds of millions—no, so that it’s got billions of dollars in resources—today too many of them turn around and use those resources to influence government to cut off that ladder, so nobody else climbs it. To cut off that ladder so that the big guys don’t have to compete with the little guys anymore.

Vox’s Weeds podcast had a good episode on Warren’s corporate governance plan that included analysis of this side of Warren — the fact that she thinks about how markets should work more than many progressives, rather than focusing just on what government should be doing. Foer has also covered this before in The Atlantic. I quote the relevant passage here.

But I want to compare Warren’s quote to the conclusion of a review paper by MIT’s John Van Reenen, who summed up the evidence on market power recently at the central bankers’ Jackson Hole meeting. Van Reenen:

Increased concentration brings with it the concern of market power and indeed, some have argued that many of the economic ills we face today in terms of sluggish productivity and real wage growth are due to rising monopoly power. My view is that this conclusion is premature. Rising aggregate markups and concentration may also reflect changes in the nature of competition where superstar firms are rewarded with greater market share in “winner take most” markets. I have offered some evidence more in line with the nuanced superstar firm model than a general fall in competition due to anti-trust and regulation. But this is for sure not the final paper in this area, however, and there are substantial uncertainties.

A final word of warning. Even if it was the case that the world is closer to the superstar firm model, this does not mean that anti-trust policy should be relaxed. Even if superstar firms attain their currently dominant positions on their merits of out-competing rivals, it does not mean that they will always use their power for the good of consumers. They may well try to entrench their position through lobbying, erecting entry barriers and buying up future rivals. As larger parts of the modern economy become winner take most/all, it is important that competition authorities develop better tools for understanding harm to innovation and future competition, rather than the traditional emphasis on the pricing decisions of current rivals.

In my view, as someone following this evidence closely, Van Reenen nails it. He takes seriously the idea that market power has risen as a result of rent-seeking and anticompetitive behavior. And he takes seriously the alternative that technology and globalization have changed competition in ways that made some firms bigger and more productive. He notes that while the latter may sound more optimistic than the former (and probably is less harmful), it’s hardly benign.

Returning to Warren… it’s tempting to frame her view as belonging to the Jefferson-Hamilton debate that’s been going on since America’s founding, in which one side prefers industry and is fine with bigness, while the other prioritizes the little guy. That may be one productive way to think about it.

But I prefer to think of it this way: two things have happened in the U.S. economy over the past 30 or so years. First, information technology has dramatically changed the nature of the economy, of firms, and of how they compete. Second, corporations have become more powerful for a whole variety of reasons, and along the way have become better able to shape competition in their favor. In some ways these are separate trends; we could have had one without the other. But to an extent they’re related. As Van Reenen notes, firms that grow large due to technology can then turn their power toward rigging the game, what Zingales calls the “Medici Cycle.” Just as important, firms that are large but aren’t great at technology, and so are threatened by digital competitors, may up their lobbying and other rent-seeking activities in order to “compete.” See: massive consolidation in the media business in response to Netflix. If you can’t beat ’em, the theory goes, get bigger.

It’s not just that Warren is stepping onto the scene and offering a Jeffersonian view. It’s that she realizes the rise of corporate power over decades has had pernicious effects in the form of rent-seeking and anticompetitive behavior. An agenda that seeks to limit that power would likely do tremendous good. However, it’s important that advocates of this view recognize that it’s only half the story. There’s another major reason why big firms have gotten bigger, why certain firms pay better than others, why workers have less bargaining power, etc. That’s the role of technology. It doesn’t invalidate the other theory. But it calls for different remedies, and so it’s important to simultaneously keep both accounts in mind.

Crowdsourced priors, aka prices

A crowd of experts can forecast future research results; by some measures, a crowd of non-experts can, too. (At least, when the crowd’s results are properly aggregated.) I posted about that result a while back; now here’s new work on replication and prediction markets. Ed Yong at The Atlantic, via Tyler Cowen:

Consider the new results from the Social Sciences Replication Project, in which 24 researchers attempted to replicate social-science studies published between 2010 and 2015 in Nature and Science—the world’s top two scientific journals. The replicators ran much bigger versions of the original studies, recruiting around five times as many volunteers as before. They did all their work in the open, and ran their plans past the teams behind the original experiments. And ultimately, they could only reproduce the results of 13 out of 21 studies—62 percent.

As it turned out, that finding was entirely predictable. While the SSRP team was doing their experimental re-runs, they also ran a “prediction market”—a stock exchange in which volunteers could buy or sell “shares” in the 21 studies, based on how reproducible they seemed. They recruited 206 volunteers—a mix of psychologists and economists, students and professors, none of whom were involved in the SSRP itself. Each started with $100 and could earn more by correctly betting on studies that eventually panned out.

At the start of the market, shares for every study cost $0.50 each. As trading continued, those prices soared and dipped depending on the traders’ activities. And after two weeks, the final price reflected the traders’ collective view on the odds that each study would successfully replicate. So, for example, a stock price of $0.87 would mean a study had an 87 percent chance of replicating. Overall, the traders thought that studies in the market would replicate 63 percent of the time—a figure that was uncannily close to the actual 62-percent success rate.

The traders’ instincts were also unfailingly sound when it came to individual studies. Look at the graph below. The market assigned higher odds of success for the 13 studies that were successfully replicated than the eight that weren’t—compare the blue diamonds to the yellow diamonds.
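The price-to-probability logic in the passage above is simple enough to sketch in a few lines of code. Everything below is invented for illustration — these are not the actual SSRP market prices or outcomes:

```python
# Hypothetical final share prices (dollars) paired with whether each
# study ultimately replicated. Illustrative numbers only.
markets = [
    (0.87, True), (0.72, True), (0.65, True), (0.58, True),
    (0.44, False), (0.31, False), (0.22, False),
]

# A final price of $0.87 is read as an 87% implied probability of replication.
implied = [price for price, _ in markets]
mean_implied = sum(implied) / len(implied)

# Realized replication rate across the same studies.
realized = sum(1 for _, ok in markets if ok) / len(markets)

# Calibration: the market looks good if these two numbers are close.
print(f"mean implied probability:  {mean_implied:.2f}")
print(f"realized replication rate: {realized:.2f}")

# Discrimination: did the studies that replicated trade at higher prices?
mean_yes = sum(p for p, ok in markets if ok) / sum(1 for _, ok in markets if ok)
mean_no = sum(p for p, ok in markets if not ok) / sum(1 for _, ok in markets if not ok)
print(f"mean price, replicated studies:     {mean_yes:.2f}")
print(f"mean price, non-replicated studies: {mean_no:.2f}")
```

The two checks mirror the two claims in the quote: the market-wide average price tracked the overall replication rate (calibration), and individual replicated studies traded higher than failed ones (discrimination).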

Notes on the “skills gap”, SBTC, and the college wage premium

The idea of a “skills gap” is both confusing and overblown. The worst version of it came in the early post-recession years, when some people managed to convince themselves that high unemployment was related to a “skills gap.” With unemployment now at ~4%, they must really think Americans’ skills have improved!

There are still other incorrect “skills gap” theories out there. This post won’t really address them, but instead will clip together some stuff and make a few quick points that are at best loosely related. My basic view is that the “gap” part of “skills gap” confuses as much as it illuminates, and that the “skills” part, though overrated in some quarters, is equally underrated in others.

Who loves market failures? One thing that’s frustrating about the “skills gap” conversation is that it somehow manages to flip everyone’s typical ideological commitments in terms of market efficiency. Left-leaning folks, normally predisposed to see market failures everywhere — especially in the labor market — suddenly switch into a model of competitive labor markets and suggest that if employers really needed more skills, they’d pay higher wages to get them (or pay for training). Right-leaning folks, meanwhile, suddenly find in the “skills gap” story a market failure they can get behind! (I am not taking sides on this; however, if you want to read about potential market failures related to skills and hiring, this is a good start.)

Supply and demand clearly matter in labor markets. This gripe isn’t specific to the discussion of skills: supply and demand are never a complete explanation of anything. The model of supply and demand in a competitive market is always a partial explanation, and often a useful baseline. It’s a better or worse explanation depending on the context, and it’s particularly ill-suited to labor markets, which are different and complex for a whole bunch of reasons. But that doesn’t mean supply and demand have nothing to say about labor markets. In fact, it seems they can often explain a very large chunk of what’s going on. Here’s one source: “The law of one wage, however, provides a surprisingly good first approximation of the structure of U.S. wages.” And here’s a piece I wrote about this. Some people are too quick to jettison supply and demand for skills as an explanation of the labor market, rather than combining it with other factors.

On skill-biased technological change and the college wage premium. This is another case where the move is 1) say that a model can’t fully explain what’s going on, then 2) dismiss it entirely even though it’s still totally useful. When I wrote about this I interviewed David Autor, who’s done a lot of the work on this:

The original idea of a “race between education and technology” — or “skill-biased technological change,” as it’s known in academia — posited that new technologies increase the demand for skilled workers. Therefore, when technology progresses faster than the supply of college graduates, the wage premium for college graduates will rise.

“As a rough depiction of 100 years of data, that’s a pretty good summary description,” said Autor, who specializes in this area.

The main reason left-leaning economists are growing dismissive of this seems to be that the college wage premium hasn’t grown much since 2000. Again, at best that’s a reason to look for other partial explanations, not to toss this one aside. But consider a couple of interesting data points: here’s a paper attributing the flattening of the college wage premium to housing costs in cities. Moreover, the college wage premium was always a proxy for skill, and it’s possible it has simply become a worse proxy over time. It’s worth noting that an analysis by Brookings found that digital jobs paid better even after controlling for education, suggesting to me that a better measure of skills — particularly those related to digital technology — might not show a flattening premium. Here’s a comprehensive review of the evidence on skills and the labor market from Autor in Science.
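The “race” logic is easy to make concrete. Here is a stylized sketch, loosely in the spirit of the canonical supply-and-demand model of the college wage premium; every parameter value is invented for illustration, and the point is only the direction of the mechanism:

```python
# A stylized "race between education and technology": the log college
# wage premium moves with the gap between growth in relative demand
# for skill and growth in the relative supply of graduates.
SIGMA = 1.6            # assumed elasticity of substitution between skill groups
DEMAND_GROWTH = 0.025  # assumed annual growth in relative demand for skill

def premium_path(years, supply_growth, start=0.4):
    """Trace the log wage premium over time under a constant
    demand-supply growth gap (all values hypothetical)."""
    log_premium = start
    path = [log_premium]
    for _ in range(years):
        log_premium += (DEMAND_GROWTH - supply_growth) / SIGMA
        path.append(log_premium)
    return path

# Supply lags demand: technology "wins the race" and the premium rises.
slow_supply = premium_path(20, supply_growth=0.010)

# Supply keeps pace with demand: the premium flattens.
fast_supply = premium_path(20, supply_growth=0.025)

print(f"premium when supply lags:      {slow_supply[0]:.3f} -> {slow_supply[-1]:.3f}")
print(f"premium when supply keeps pace: {fast_supply[0]:.3f} -> {fast_supply[-1]:.3f}")
```

The second scenario is one way to read the post-2000 flattening without throwing out the model: the premium can stall either because demand growth slowed or because supply caught up, and the framework accommodates both.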

What about the recession? I started by knocking the idea that post-recession unemployment was the result of a skills gap. There is a bit of evidence on a different but vaguely related idea: that after the recession, employers started demanding more skills in job postings. There are a couple of stories you can tell (monopsony power?), but even if the recession did sort of “kick start” technology investment, it’s just not plausible that this is a major explanation of post-recession unemployment.

Labor markets are complicated. Throwing out skills-based explanations of the labor market is as foolish as pretending they explain everything.

Paper on experts, non-experts, and forecasting accuracy

This paper is super cool. I have not read it in full yet — just came across it:

When it comes to forecasting future research results, who knows what? We have attempted to provide systematic evidence within one particular setting, taking advantage of forecasts by a large sample of experts and of non-experts regarding 15 different experimental treatments. Within this context, forecasts carry a surprising amount of information, especially if the forecasts are aggregated to form a wisdom-of-crowds forecast. This information, however, does not reside with traditional experts. Forecasters with higher vertical, horizontal, or contextual expertise do not make more accurate forecasts. Furthermore, forecasts by academic experts are more informative than forecasts by non-experts only if a measure of accuracy in ‘levels’ is used. If forecasts are used just to rank treatments, non-experts, including even an easy-to-recruit online sample, do just as well as experts. Thus, the answer to the who part of the question above is intertwined with the answer to the what part. Even if one restricts oneself to the accuracy in ‘levels’ (absolute error and squared error), one can select non-experts with accuracy meeting, or exceeding, that of the experts. Therefore, the information about future experimental results is more widely distributed than one may have thought. We presented also a simple model to organize the evidence on expertise. The current results, while just a first step, already draw out a number of implications for increasing accuracy of research forecasts. Clearly, asking for multiple opinions has high returns. Further, traditional experts may not necessarily offer a more precise forecast than a well-motivated audience, and the latter is easier to reach. One can then screen the non-experts based on measures of effort, confidence, and accuracy on a trial question.

“Levels” basically means forecasting how well something does, whereas “order” just means ranking things from best to worst. If all you need to know is what’s better and what’s worse, experts have no real edge. But if you need to know how much better, they outperform most people.
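A toy example makes the levels-versus-order distinction concrete. The forecasts below are made up: the “expert” is close in levels, the “layperson” is far off in levels, yet both get the ranking exactly right, so a rank-based score can’t tell them apart while a levels-based score can:

```python
# Hypothetical true treatment effects and two forecasters' predictions.
true_effects = [10.0, 20.0, 30.0, 40.0]

expert = [12.0, 18.0, 33.0, 41.0]      # close in levels, correct order
layperson = [30.0, 40.0, 50.0, 60.0]   # far off in levels, correct order

def mean_abs_error(forecast, truth):
    """Accuracy 'in levels': average absolute forecast error."""
    return sum(abs(f - t) for f, t in zip(forecast, truth)) / len(truth)

def rank(values):
    """Accuracy 'in order': position of each item when sorted ascending."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

expert_mae = mean_abs_error(expert, true_effects)
layperson_mae = mean_abs_error(layperson, true_effects)

# In levels, the expert wins by a wide margin...
print(f"expert MAE:    {expert_mae}")
print(f"layperson MAE: {layperson_mae}")

# ...but in order, the two forecasts are indistinguishable (and both correct).
same_order = rank(expert) == rank(layperson) == rank(true_effects)
print(f"identical (correct) rankings: {same_order}")
```

This is the mechanical reason the paper’s answer to “who knows what?” depends on the scoring rule: absolute and squared error reward calibrated magnitudes, while ranking treatments only rewards getting the comparisons right.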

What else predicts accuracy?

The revealed-ability variable plays an important role: the prediction without it does not achieve the same accuracy. Thus, especially if it is possible to observe the track record, even with a very short history (in this case we use just one forecast), it is possible to identify subsamples of non-expert forecasters with accuracy that matches or surpasses the accuracy of expert samples.

And, who do we think will be good at forecasting?

Figure 13 plots the beliefs of the 208 experts compared with the actual accuracy for the specified group of forecasters. The first cell indicates that the experts are on average accurate about themselves, expecting to get about 6 forecasts ‘correct’, in line with the realization. Furthermore, as the second cell shows, the experts expect other academics to do on average somewhat better than them, at 6.7 correct forecasts. Thus, this sample of experts does not display evidence of overconfidence (Healy and Moore, 2008), possibly because the experts were being particularly cautious not to fall into such a trap. The key cells are the next ones, on the expected accuracy for other groups. The experts expect the 15 most-cited experts to be somewhat more accurate when the opposite is true. They also expect experts with a psychology PhD to be more accurate where, once again, the data points if anything in the other direction. They also expect that PhD students would be significantly less accurate, whereas the PhD students match the experts in accuracy. The experts also expect that the PhD students with expertise in behavioral economics would do better, which we do not find. The experts correctly anticipate that MBA students and MTurk workers would do worse. However, they think that having experienced the task among the MTurkers would raise noticeably the accuracy, counterfactually.

I see this as broadly consistent with Tetlock: some people do better than others when it comes to forecasting (empirical judgment). Expertise does seem to help somewhat — but with caveats. And the people who do best are not always the ones you’d expect.

Quantifying and oversimplifying are two different things

Consider this bit from a recent New Yorker piece on whether economists and humanists can get along:

“Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation.”

Those two sentences are not remotely close to describing the same thing! Using equations to model human behavior does require some simplification. But equations don’t commit you to a single, unifying explanation any more than explaining things in words ensures subtlety.

I’m currently reading The Model Thinker by Scott Page, which perfectly illustrates this point. Page advances two ideas: first, that mathematical models of human behavior are useful; and second, that many different models are better than just one.

Page is right on both counts. One might still object to the constant desire to quantify or to an over-reliance on formal models. You might even think those two errors are correlated. But they’re not the same thing, and the existence of one can’t fully explain the other.

The streaming strategy puzzle

Netflix’s chief content officer Ted Sarandos, in 2013:

“The goal is to become HBO faster than HBO can become us.”

And Netflix CEO Reed Hastings in 2017:

We’re not trying to meet all needs. So, Amazon’s business strategy is super broad. Meet all needs. I mean, the stuff that will be in Prime in five or ten years will be amazing, right? And so we can’t try to be that — we’ll never be as good as them at what they’re trying to be. What we can be is the emotional connection brand, like HBO or Netflix. So, think of it as they’re trying to be Walmart, we’re trying to be Starbucks. So, super focused on one thing that people are very passionate about.

But here’s Sarandos in 2018:

“There’s no such thing as a Netflix show… Our brand is personalization.”

Meanwhile, at HBO:

During a closed-door June 19 conversation with some 150 HBO staff, “Stankey never uttered the word ‘Netflix,’ but he did suggest that HBO would have to become more like a streaming giant to thrive in the new media landscape.”

What is going on? Does Netflix want to be HBO or does it want to be Amazon? Someone should at least have the decency to tell HBO the answer, because it matters a lot for how closely the channel should be copying Netflix.

(No links because I’m on my phone.)

The correlation between social welfare and innovation

I was brushing up on the R programming language today, and to do so I did some exploratory data analysis on countries’ innovative capacities and measures of human welfare.

Not surprisingly, innovativeness correlates with GDP per capita and with measures of social progress. Combining those measures predicts innovativeness better than either measure on its own.

This sort of exploratory analysis is too simple to mean much, but intuitively it makes sense to expect a causal relationship in both directions. We’d expect innovative countries to be wealthier, and even better off by other measures, since innovation drives all sorts of progress, not all of which is captured in GDP. Likewise, we’d expect countries that rank highly in measures of human progress to be more innovative, since innovation depends on human capital.

And, of course, there is all sorts of research on this stuff. Nonetheless, it’s nice to see it show up in a simple correlational analysis. Code is here.
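For readers without R handy, the gist of that exploratory analysis can be sketched in a few lines of Python. Every number below is invented — this is the shape of the analysis, not the real country data:

```python
# Toy version of the exploratory analysis: correlate a made-up
# "innovation index" with GDP per capita and a social-progress score.
countries = {
    #     innovation, GDP per capita, social progress
    "A": (80, 55_000, 88),
    "B": (65, 42_000, 81),
    "C": (50, 30_000, 74),
    "D": (35, 18_000, 62),
    "E": (20, 9_000, 55),
}

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

innov = [v[0] for v in countries.values()]
gdp = [v[1] for v in countries.values()]
social = [v[2] for v in countries.values()]

print(f"innovation ~ GDP per capita:  r = {pearson(innov, gdp):.2f}")
print(f"innovation ~ social progress: r = {pearson(innov, social):.2f}")
```

With real data the correlations are of course weaker and messier than in this monotone toy set, but the structure — one outcome measure correlated against two candidate predictors — is the same.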