Notes on the “skills gap”, SBTC, and the college wage premium

The idea of a “skills gap” is both confusing and overblown. The worst version of it came in the early post-recession years, when some people managed to convince themselves that high unemployment was caused by a “skills gap.” With unemployment now at ~4%, they must really think Americans’ skills have improved!

There are still other incorrect “skills gap” theories out there. This post won’t really address them, but instead will clip together some stuff and make a few quick points that are at best loosely related. My basic view is that the “gap” part of “skills gap” confuses as much as it illuminates, and that the “skills” part, though overrated in some quarters, is equally underrated in others.

Who loves market failures? One thing that’s frustrating about the “skills gap” conversation is that it somehow manages to flip everyone’s typical ideological commitments about market efficiency. Left-leaning folks, normally predisposed to see market failures everywhere — especially in the labor market — suddenly switch into a model of competitive labor markets and suggest that if employers really needed more skills, they’d pay higher wages to get them (or pay for training). Right-leaning folks, meanwhile, suddenly find in the “skills gap” story a market failure they can get behind! (I’m not taking sides here, but if you want to read about potential market failures related to skills and hiring, this is a good start.)

Supply and demand clearly matter in labor markets. This gripe isn’t specific to the discussion of skills. Basically, supply and demand are never a complete explanation of anything. The model of supply and demand in a competitive market is always a partial explanation, and often a useful baseline. How good an explanation it is depends on the context. It’s particularly ill-suited to labor markets, which are distinctive and complex for a whole bunch of reasons. But that doesn’t mean supply and demand has nothing to say about labor markets; in fact, it can often explain a very large chunk of what’s going on. Here’s one source: “The law of one wage, however, provides a surprisingly good first approximation of the structure of U.S. wages.” And here’s a piece I wrote about this. Some people are too quick to jettison supply and demand for skills as an explanation of the labor market, rather than combining it with other factors.
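
For concreteness, here is the baseline competitive model in its barest form; the linear curves and parameters below are my own toy illustration, not anything from the sources above:

```latex
% Toy competitive market for one type of labor: linear demand and supply.
% Functional forms and the positivity restrictions are illustrative only.
\[
  L^{d}(w) = a - b\,w, \qquad L^{s}(w) = c + d\,w, \qquad b,\, d > 0
\]
% Setting demand equal to supply gives the "law of one wage" benchmark:
\[
  w^{\ast} = \frac{a - c}{b + d}, \qquad
  L^{\ast} = L^{d}(w^{\ast}) = L^{s}(w^{\ast})
\]
% A rightward shift in demand (a larger a) raises the equilibrium wage,
% which is the sense in which this is a useful first approximation.
```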

On skill-biased technological change and the college wage premium. This is another case where the move is to 1) note that a model can’t fully explain what’s going on, and then 2) dismiss it entirely even though it’s still useful. When I wrote about this, I interviewed David Autor, who has done a lot of the work on this:

The original idea of a “race between education and technology” — or “skill-biased technological change,” as it’s known in academia — posited that new technologies increase the demand for skilled workers. Therefore, when technology progresses faster than the supply of college graduates, the wage premium for college graduates will rise.

“As a rough depiction of 100 years of data, that’s a pretty good summary description,” said Autor, who specializes in this area.
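
To make the “race” metaphor concrete, here is the standard supply-and-demand formulation behind it (the CES setup associated with Katz and Murphy; the algebra below is the textbook version, not anything quoted from Autor):

```latex
% Standard "race" model: CES production in college (H) and
% non-college (L) labor, with elasticity of substitution sigma.
\[
  Y = \left[ \alpha (A_H H)^{\rho} + (1-\alpha)(A_L L)^{\rho} \right]^{1/\rho},
  \qquad \rho = \frac{\sigma - 1}{\sigma}
\]
% With competitive wages equal to marginal products, the log college
% wage premium splits into a technology term and a supply term:
\[
  \ln\frac{w_H}{w_L}
  = \frac{\sigma - 1}{\sigma}\,\ln\frac{A_H}{A_L}
  \;-\; \frac{1}{\sigma}\,\ln\frac{H}{L}
  \;+\; \text{constant}
\]
% The premium rises when skill-biased technology (A_H / A_L) outpaces
% the relative supply of graduates (H / L), and falls when supply wins.
```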

The main reason left-leaning economists have grown dismissive of this seems to be that the college wage premium hasn’t grown much since 2000. Again, at best that’s a reason to look for other partial explanations, not to toss this one aside. A couple of interesting data points, though: here’s a paper attributing the flattening of the college wage premium to housing costs in cities. Moreover, the college wage premium was always a proxy for skill, and it may simply have become a worse proxy over time. It’s worth noting that an analysis by Brookings found that digital jobs paid better even after controlling for education, which suggests to me that a better measure of skills, particularly skills related to digital technology, might not show a flattening premium. Here’s a comprehensive review of the evidence on skills and the labor market from Autor in Science.

What about the recession? I started by knocking the idea that post-recession unemployment was the result of a skills gap. There is a bit of evidence for a different but vaguely related idea: that after the recession, employers started demanding more skills in job postings. There are a couple of stories you can tell (monopsony power?), but even if the story is that the recession kick-started technology investment, it’s just not plausible that this is a major explanation of post-recession unemployment.

Labor markets are complicated. Throwing out skills-based explanations of the labor market is as foolish as pretending they explain everything.

Paper on experts, non-experts, and forecasting accuracy

This paper is super cool. I have not read it in full yet — just came across it:

When it comes to forecasting future research results, who knows what? We have attempted to provide systematic evidence within one particular setting, taking advantage of forecasts by a large sample of experts and of non-experts regarding 15 different experimental treatments. Within this context, forecasts carry a surprising amount of information, especially if the forecasts are aggregated to form a wisdom-of-crowds forecast. This information, however, does not reside with traditional experts. Forecasters with higher vertical, horizontal, or contextual expertise do not make more accurate forecasts. Furthermore, forecasts by academic experts are more informative than forecasts by non-experts only if a measure of accuracy in ‘levels’ is used. If forecasts are used just to rank treatments, non-experts, including even an easy-to-recruit online sample, do just as well as experts. Thus, the answer to the who part of the question above is intertwined with the answer to the what part. Even if one restricts oneself to the accuracy in ‘levels’ (absolute error and squared error), one can select non-experts with accuracy meeting, or exceeding, that of the experts. Therefore, the information about future experimental results is more widely distributed than one may have thought. We presented also a simple model to organize the evidence on expertise. The current results, while just a first step, already draw out a number of implications for increasing accuracy of research forecasts. Clearly, asking for multiple opinions has high returns. Further, traditional experts may not necessarily offer a more precise forecast than a well-motivated audience, and the latter is easier to reach. One can then screen the non-experts based on measures of effort, confidence, and accuracy on a trial question.

“Levels” basically means how well something does, whereas “order” just means ranking things from best to worst. If all you need to know is what’s better and what’s worse, experts aren’t that much better than anyone else. But if you need to know how much better, they outperform most people.
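
To make the levels-versus-order distinction concrete, here’s a minimal sketch with made-up numbers (my own illustration, not the paper’s data or code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank(v):
    """Ranks of v (0 = smallest), used for 'order' accuracy."""
    return np.argsort(np.argsort(v))

# True effects of 15 hypothetical treatments (toy values).
truth = rng.uniform(0, 10, size=15)

# An "expert": small unbiased noise, so close in levels and in order.
expert = truth + rng.normal(0, 1.0, size=15)

# A "non-expert": badly off in levels (bias + compression) but still
# tracking which treatments are bigger than which.
nonexpert = 3 + 0.5 * truth + rng.normal(0, 0.5, size=15)

for name, f in [("expert", expert), ("non-expert", nonexpert)]:
    abs_err = np.mean(np.abs(f - truth))               # accuracy in "levels"
    order = np.corrcoef(rank(f), rank(truth))[0, 1]    # accuracy in "order"
    print(f"{name:10s}  mean abs error = {abs_err:.2f}   rank corr = {order:.2f}")

# Typical run: the non-expert's level error is far larger than the expert's,
# while the two rank correlations are similar. Ranking treatments simply
# does not require being right about magnitudes.
```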

What else predicts accuracy?

The revealed-ability variable plays an important role: the prediction without it does not achieve the same accuracy. Thus, especially if it is possible to observe the track record, even with a very short history (in this case we use just one forecast), it is possible to identify subsamples of non-expert forecasters with accuracy that matches or surpasses the accuracy of expert samples.
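
And a minimal sketch of the screening idea, again with made-up numbers: give every non-expert one trial question, keep the ones who did well on it, and compare accuracy on the held-out questions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_people, n_questions = 200, 15
truth = rng.uniform(0, 10, size=n_questions)

# Heterogeneous non-experts: each has a persistent personal noise level
# (a toy stand-in for underlying forecasting skill).
noise_sd = rng.uniform(0.5, 4.0, size=n_people)
forecasts = truth + rng.normal(0, 1, (n_people, n_questions)) * noise_sd[:, None]

# "Revealed ability": score everyone on a single trial question...
trial_error = np.abs(forecasts[:, 0] - truth[0])
keep = trial_error <= np.quantile(trial_error, 0.25)   # keep the best quartile

# ...then evaluate on the remaining, held-out questions.
held_out = np.abs(forecasts[:, 1:] - truth[1:]).mean(axis=1)
print(f"all non-experts:       mean abs error = {held_out.mean():.2f}")
print(f"screened on one trial: mean abs error = {held_out[keep].mean():.2f}")

# One noisy question is a weak signal, but because skill differences persist
# across questions, it is enough to select a clearly better subsample.
```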

And, who do we think will be good at forecasting?

Figure 13 plots the beliefs of the 208 experts compared with the actual accuracy for the specified group of forecasters. The first cell indicates that the experts are on average accurate about themselves, expecting to get about 6 forecasts ‘correct’, in line with the realization. Furthermore, as the second cell shows, the experts expect other academics to do on average somewhat better than them, at 6.7 correct forecasts. Thus, this sample of experts does not display evidence of overconfidence (Healy and Moore, 2008), possibly because the experts were being particularly cautious not to fall into such a trap. The key cells are the next ones, on the expected accuracy for other groups. The experts expect the 15 most-cited experts to be somewhat more accurate when the opposite is true. They also expect experts with a psychology PhD to be more accurate where, once again, the data points if anything in the other direction. They also expect that PhD students would be significantly less accurate, whereas the PhD students match the experts in accuracy. The experts also expect that the PhD students with expertise in behavioral economics would do better, which we do not find. The experts correctly anticipate that MBA students and MTurk workers would do worse. However, they think that having experienced the task among the MTurkers would raise noticeably the accuracy, counterfactually.

I see this as broadly consistent with Tetlock: some people do better than others when it comes to forecasting (empirical judgment). Expertise does seem to help somewhat — but with caveats. And the people who do best are not always the ones you’d expect.

Quantifying and oversimplifying are two different things

Consider this bit from a recent New Yorker piece on whether economists and humanists can get along:

“Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation.”

Those two sentences are not remotely close to describing the same thing! Using equations to model human behavior does require some simplification. But equations don’t commit you to a single, unifying explanation any more than explaining things in words guarantees subtlety.

I’m currently reading The Model Thinker by Scott Page, which illustrates this point perfectly. Page advances two ideas: first, that mathematical models of human behavior are useful, and second, that many different models are better than any single one.

Page is right on both counts. You might still object to the constant desire to quantify, or to an over-reliance on formal models. You might even think those two errors are correlated. But they’re not the same thing, and the existence of one can’t fully explain the other.
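
A toy illustration of the many-models idea (my own sketch, not anything from Page’s book): three simple models that are each wrong in a different way can, averaged together, beat every one of them individually.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: an outcome driven by three independent factors plus noise.
X = rng.normal(size=(500, 3))
y = X.sum(axis=1) + rng.normal(0, 0.5, size=500)

# Three deliberately narrow models, each seeing only one factor.
preds = []
for j in range(3):
    D = np.column_stack([np.ones(500), X[:, j]])    # intercept + one factor
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)    # least-squares fit
    p = D @ beta
    preds.append(p)
    rmse = np.sqrt(np.mean((p - y) ** 2))
    print(f"model {j} (factor {j} only): RMSE = {rmse:.2f}")

# Averaging the three one-factor models beats each individual model:
# their errors come from different omitted factors, so they partly cancel.
avg = np.mean(preds, axis=0)
print(f"average of all three:      RMSE = {np.sqrt(np.mean((avg - y) ** 2)):.2f}")
```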

The streaming strategy puzzle

Netflix’s chief content officer Ted Sarandos, in 2013:

“The goal is to become HBO faster than HBO can become us.”

And Netflix CEO Reed Hastings in 2017:

We’re not trying to meet all needs. So, Amazon’s business strategy is super broad. Meet all needs. I mean, the stuff that will be in Prime in five or ten years will be amazing, right? And so we can’t try to be that — we’ll never be as good as them at what they’re trying to be. What we can be is the emotional connection brand, like HBO or Netflix. So, think of it as they’re trying to be Walmart, we’re trying to be Starbucks. So, super focused on one thing that people are very passionate about.

But here’s Sarandos in 2018:

“There’s no such thing as a Netflix show… Our brand is personalization.”

Meanwhile, at HBO:

During a closed-door June 19 conversation with some 150 HBO staff, “Stankey never uttered the word ‘Netflix,’ but he did suggest that HBO would have to become more like a streaming giant to thrive in the new media landscape.”

What is going on? Does Netflix want to be HBO or does it want to be Amazon? Someone should at least have the decency to tell HBO the answer, because it matters a lot for how closely the channel should be copying Netflix.

(No links because I’m on my phone.)