What causes recessions?

A few different resources explaining the various causes of recessions…

In 2019 I wrote a feature about firms and recessions, and I summed up the causes of recession this way:

Recessions… can be caused by economic shocks (such as a spike in oil prices), financial panics (like the one that preceded the Great Recession), rapid changes in economic expectations (the so-called “animal spirits” described by John Maynard Keynes; this is what caused the dot-com bubble to burst), or some combination of the three. Most firms suffer during a recession, primarily because demand (and revenue) falls and uncertainty about the future increases.

Here are the three categories that a Congressional Research Service report used in a 2019 brief on the causes of recessions:

Overheating
Recessions can be caused by an overheated economy, in which demand outstrips supply, expanding past full employment and the maximum capacity of the nation’s resources. Overheating can be sustained temporarily, but eventually spending will fall in order for supply to catch up to demand. A classic overheating economy has two key characteristics—rising inflation and unemployment below its “natural” rate…

Asset Bubbles
The last two recessions were arguably caused by overheating of a different type. While neither featured a large increase in price inflation, both featured the rapid growth and subsequent bursting of asset bubbles. The 2001 recession was preceded by the “dot-com” stock bubble, and the 2007-2009 recession was preceded by the housing bubble…

Economic Shocks
They can also be triggered by negative, unexpected, external events, which economists refer to as “shocks” to the economy that disrupt the expansion…
A classic example of a shock is the oil shocks of the 1970s and 1980s.

Here is the IMF:

There are a variety of reasons recessions take place. Some are associated with sharp changes in the prices of the inputs used in producing goods and services. For example, a steep increase in oil prices can be a harbinger of a recession. As energy becomes expensive, it pushes up the overall price level, leading to a decline in aggregate demand. A recession can also be triggered by a country’s decision to reduce inflation by employing contractionary monetary or fiscal policies. When used excessively, such policies can lead to a decline in demand for goods and services, eventually resulting in a recession.

Other recessions, such as the one that began in 2007, are rooted in financial market problems. Sharp increases in asset prices and a speedy expansion of credit often coincide with rapid accumulation of debt. As corporations and households get overextended and face difficulties in meeting their debt obligations, they reduce investment and consumption, which in turn leads to a decrease in economic activity. Not all such credit booms end up in recessions, but when they do, these recessions are often more costly than others. Recessions can be the result of a decline in external demand, especially in countries with strong export sectors.

And here is a bit from David Moss’s A Concise Guide to Macroeconomics, 2nd edition:

Anything that causes labor, capital, or [total factor productivity] to fall could potentially cause a decline in output… A massive earthquake, for example, could reduce output by destroying vast amounts of physical capital. Similarly, a deadly epidemic could reduce output by decimating the labor force…

In some cases, however, output may decline sharply even in the absence of any earthquakes or epidemics… [He cites the Great Depression.] The British economist John Maynard Keynes claimed to have the answer… His key insight, implied by the phrase ‘immaterial devices of the mind’, was that the problem was mainly one of expectations and psychology. For some reason, people had gotten it into their heads that the economy was in trouble, and that belief rapidly became self-fulfilling…. Driven by nothing more than expectations, which Keynes would later refer to as ‘animal spirits,’ the economy had fallen into a vicious downward spiral…

Starting around the time of Keynes, therefore, economists began to realize that there was more to economic growth than just the supply side. Demand mattered a great deal as well, particularly since it could sometimes fall short. In fact, over roughly the next 40 years, it became an article of faith among leading economists and government officials that it was the government’s responsibility to ‘manage demand’ through fiscal and monetary policy.

p. 22-26

There’s one other bit of Moss’s book that’s worth mentioning here. It’s an incredibly slim volume and so he faces the question of how to organize such a brief survey of macroeconomics. He chose to use three overarching topics: Output, money, and expectations.

His whole discussion of recessions comes in the Output section, because a recession is a sustained decline in economic output. But you could break the causes of recessions into his three buckets: Output = shocks to labor, capital, or productivity. Money = financial crises or policy-induced shifts to interest rates or the money supply. Expectations = shifts in collective psychology that lower demand.
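To make the Output bucket a bit more concrete, here is a minimal sketch using a standard Cobb-Douglas production function. The functional form and the numbers are my own illustrative assumptions, not anything from Moss's book; the point is just that a fall in labor, capital, or total factor productivity mechanically lowers output.

```python
# A minimal sketch, assuming a standard Cobb-Douglas production function
# (the functional form and numbers are illustrative, not from Moss's book):
# output Y depends on total factor productivity A, capital K, and labor L.

def output(A, K, L, alpha=0.3):
    """Cobb-Douglas output: Y = A * K^alpha * L^(1 - alpha)."""
    return A * K**alpha * L**(1 - alpha)

baseline = output(A=1.0, K=100.0, L=100.0)

# A shock to any input lowers output:
epidemic = output(A=1.0, K=100.0, L=90.0)        # labor force shrinks 10%
earthquake = output(A=1.0, K=80.0, L=100.0)      # capital stock shrinks 20%
productivity = output(A=0.95, K=100.0, L=100.0)  # TFP falls 5%

for name, y in [("baseline", baseline), ("epidemic", epidemic),
                ("earthquake", earthquake), ("productivity", productivity)]:
    print(f"{name:>12}: {y:6.1f} ({y / baseline - 1:+.1%} vs. baseline)")
```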

With all this sketched out, one of my biggest questions is how to think about 2000. CRS buckets it with 2008 as the bursting of an “asset bubble.” That feels strange to me. It’s true that both involved financial markets and asset bubbles. But 2000 seemed like a shift in expectations, that then led to an asset bubble bursting and a fairly mild recession. In 2008, an asset bubble was part of the story. But the real story was a financial crisis sparked by opacity and complexity.

I think if I were writing a paragraph on the causes of recession today I’d go with four buckets:

  1. Economic shocks to productive capacity
  2. Policy-induced declines in demand (monetary or fiscal policy)
  3. Changes in expectations (“animal spirits”)
  4. Financial crises

You could think of 2000 as sort of straddling 3 and 4 whereas the global financial crisis was really all 4. The Covid recession was 1. And if we have a recession in the next year it’ll be firmly in 2. Honestly, 3 feels like the rarest one, but I think that’s because most of the time it overlaps with either 2 or 4. Shifts in investor sentiment span 3 and 4, depending on the type of investors and the type of asset. Shifts in demand now almost always relate to 2, since policymakers have internalized Keynes’ insight and actively manage it.

You could also try breaking it up into:

  1. Supply shocks
  2. Demand shocks (whether via policy or shifts in “animal spirits”)
  3. Financial crises

The Fed can’t do it all

In this post, I want to clip together a few different threads about central banks and inflation–and specifically how you might fight inflation beyond interest rates.

In my new project, Nonrival (a full post about that later but sign up!), I covered the Bank of England this week.

I wrote:

Central banks, for all their faults, have mostly spent the last 15 years trying to do their jobs. Legislators have frequently made those jobs harder, including by embracing austerity in the 2010s. But central banks can only do so much—to boost an economy, or even to fight inflation. At some point investors start to ask: What about the other guys?

Today Ezra Klein goes in a similar direction:

Inflation is a scourge, but interest rates are a blunt tool… I’m more interested in another feature of the progressive consumption tax: the ability to dial it up and down to respond to different economic conditions. In a time of recession, we could drop taxes on new spending, giving the rich and poor alike more reason to spend. In times of inflation, we could raise taxes on new spending, particularly among the wealthy, giving them a concrete reason to cut back immediately and to save and invest more at the same time.

In August, Joe Stiglitz and Anton Korinek put out a whitepaper arguing that monetary policy was being asked to do too much in stabilizing the business cycle.

And at Vox, Emily Stewart interviews Nathan Tankus about whether the Fed is really able to tackle inflation on its own. (I don’t love the “is the Fed a scam” framing but there are some really interesting ideas covered here):

Part of our issue is that we haven’t had other agencies who have been explicitly given the task to think about inflation, to preemptively respond to problems that could potentially cause inflation… We’ve been sold this lie that we can give this one agency responsibility for managing the economy, inoculated from politics, and then it’ll work out best for everyone.

To me the idea here isn’t that you ask the Fed to do less. And my feeling is that while central banks have their problems, they’ve had a pretty good track record relative to other forms of policymaking for the last 15 years.

But it’d be nice if they got a little help. You can’t run an entire economy via monetary policy.

Randomization is good

Researchers used LinkedIn to study the economic power of “weak ties.” It’s a fascinating topic and you can read about the study here. The New York Times reported on the study with the headline “LinkedIn Ran Social Experiments on 20 Million Users Over Five Years.” To put it charitably, that’s a very weird reaction.

Stepping back for a second… People are rightly worried about the power large online platforms have to influence their behavior. That topic can head down a philosophical rabbit hole, as it invites questions about agency and free will. But it’s a perfectly reasonable thing to worry about: Code is a form of governance, and we ought to question the ways companies govern our online spaces.

But that’s happening all the time! Once you become concerned about companies influencing behavior you’ll see it all around you. There’s nothing special about an A/B test in this regard. Randomized experiences are just one way that influence might happen.

However, there’s something nefarious in the colloquial meaning of someone “experimenting on you.” That sounds especially bad. The thing is that the colloquial meaning doesn’t have to involve randomization: If Facebook just tweaks how the news feed works in order to get you to click more ads, that can be a scary form of influence–and meet the colloquial definition of “experimenting” on you–even if they don’t run a randomized controlled trial.
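To make that distinction concrete, here's a minimal sketch of the two paths. The function names and the "more ads" variant are hypothetical; the point is that both a plain rollout and an A/B test change what users see, and only the second involves randomization.

```python
# A minimal sketch with hypothetical names and variants: both paths change
# what users see; only the second is an "experiment" in the randomized sense.
import random

def ranked_feed(user_id: int, variant: str) -> str:
    """Stand-in for a feed-ranking function; 'variant' changes the ranking."""
    return f"feed for user {user_id}, ranked by '{variant}'"

# 1) A plain product tweak: everyone gets the new ranking. No randomization,
#    but still a deliberate attempt to influence behavior.
def rollout(user_id: int) -> str:
    return ranked_feed(user_id, variant="more_ads")

# 2) An A/B test: each user is randomly (but stably) assigned to the old or
#    new ranking so the effect of the change can be measured.
def ab_test(user_id: int, seed: str = "experiment-1") -> str:
    rng = random.Random(f"{user_id}:{seed}")
    variant = "more_ads" if rng.random() < 0.5 else "status_quo"
    return ranked_feed(user_id, variant)

for uid in range(3):
    print(rollout(uid), "|", ab_test(uid))
```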

And then there’s the fact that in this case, the randomization is part of the sort of thing you might hope platforms would do: Run experiments in collaboration with researchers, who no doubt weighed the ethical considerations involved, in order to create new knowledge.

Of all the ways platforms might influence us, this seems like one of the least nefarious. But the word “experiment” carries a weight that can make this seem like a uniquely worrisome thing. Worry about the influence, not the academic experiments.

Side doors

“Well shit, we just need to blog more.”

The Verge has been redesigned. That’s from the editor-in-chief, Nilay Patel, who goes on to say:

When you embark on a project to totally reboot a giant site that makes a bunch of money, you inevitably get asked questions about conversion metrics and KPIs and other extremely boring vocabulary words. People will pop out of dark corners trying to start interminable conversations about “side doors,” and you will have to run away from them, screaming.

But there’s only one real goal here: The Verge should be fun to read, every time you open it. If we get that right, everything else will fall into place.

That stood out to me because I used to say my whole job was walking into rooms and explaining to people I worked with that the audience came in almost entirely through side doors. By that I meant mostly social media, but also search engines, messaging apps, and emails you didn’t send. Almost no one came to your homepage, so if you wanted to attract readers you needed to understand side doors. This wasn’t novel on my part at all; it was conventional wisdom. But as a “digital” editor focused on social media and audience development I gave this pitch a lot.

And I ended by saying: You can’t change this. You can build the nicest homepage in the world but people still will mostly come through side doors.

But a lot of those side doors just turned out to be bad — not just for publishers but for readers and for public discourse, too. Algorithmic social media increasingly feels like a wrong turn. It’s not good for us.

So I’m glad to see The Verge trying to build its own feed and attract more readers directly. It probably was the case that, when I was talking about side doors in the mid-2010s, most small- and medium-sized publishers couldn’t do much on their own to change those dynamics. It’s not usually a good idea to stand athwart audience habits yelling “Stop!”

But not every audience habit is a step in the right direction either, and even trends that seem inevitable can shift over a period of years.

Side doors are part of what makes the internet so great: You jump from one thing to the next, following a chain of links to places you might not have expected. But many of the side doors we’ve actually built are bad. I hope publishers like The Verge can convince more readers to try the front door for a change.

Some intellectual influences

I’ve been thinking about the perspectives and schools of thought that I came to in my formative years that, for better or worse, have shaped how I think about a wide range of things. I thought it’d be useful to sketch those out, if only for myself. They’re quite different from each other — some are schools of thought or intellectual subfields, some are hazy ideas that resist easy definition.

Two are subfields of economics: behavioral economics and the economics of ideas. One is a philosophical tradition: American pragmatism. And two are related to the internet: what I’m calling the “Berkman perspective,” a set of ideas from internet scholars in and around the Berkman Klein Center; and the “wonkosphere,” the policy blogging world that thrived during the Obama years, of which I was a voracious reader.

Behavioral economics

I’ve been deeply influenced by both the popular writing and research on cognitive bias, decision making, and forecasting. In a nutshell, I take this work to suggest:

The economics of ideas

I’ve always been drawn to the topic of innovation and how it happens, and I’ve enjoyed a number of great books about it. In 2015 I had the privilege of auditing a PhD course at MIT Sloan on the economics of ideas, which helped me go deeper on the subject. In terms of what I’ve come to take away from this field:

  • New ideas are the driver of long-run economic growth, so few questions are as important as the question of how ideas are produced, aka how innovation happens
  • Ideas are an unusual kind of economic good because they’re nonrival. Some of the big assumptions that drive how we’d normally think about markets don’t apply
  • Macro-theoretical models of ideas-driven growth are useful but the economics of ideas is and should be a deeply empirical field: We should look at incentives and markets and property rights but also culture and institutions to understand how innovation happens

The Berkman view

This one was the hardest to name, but I remain deeply influenced by a set of scholars and writers who studied the internet in the early 2000s and 2010s. Many, though certainly not all, were associated at some point with the Berkman Klein Center for Internet & Society at Harvard. And I’m proud to have spent part of a semester studying AI ethics and governance there as an Assembly Fellow in 2019. If I were to sum up what I took from a range of thinkers, it’s this:

  • The internet was a big deal and worth studying. This seems obvious now, but it wasn’t obvious to many people even 15 years ago.
  • “Code is law,” meaning software can shape behavior. It can be a form of governance, limiting what we can do or pushing us toward a particular decision.
  • The internet could enable new and potentially better forms of cooperation and communication.

The “wonkosphere”

It’s hard to overstate how much I was influenced by the policy and economics blogging world of the late 2000s and early 2010s. In terms of what I took from it:

  • Argument produces knowledge. The back-and-forth within this world wasn’t just fun to read, it showed a way to think and debate and learn (not always good of course!)
  • The internet is an amazing tool for research. Reading the wonkosphere convinced me that a good faith reading of the massive quantity of high-quality information available online often produces better journalism than so-called “neutral” reporting
  • There is a technocratic approach to politics that, for better or worse, musters evidence and arguments about policy questions and is skeptical of overarching ideology as a way of choosing between policies

Pragmatism

This last one I’ll just turn over to the intro from the Stanford Encyclopedia of Philosophy:

Pragmatism is a philosophical tradition that – very broadly – understands knowing the world as inseparable from agency within it. This general idea has attracted a remarkably rich and at times contrary range of interpretations, including: that all philosophical concepts should be tested via scientific experimentation, that a claim is true if and only if it is useful (relatedly: if a philosophical theory does not contribute directly to social progress then it is not worth much), that experience consists in transacting with rather than representing nature, that articulate language rests on a deep bed of shared human practices that can never be fully ‘made explicit’.

Behavioral economics in one chart

It’s sometimes claimed, not entirely unreasonably, that the research on cognitive biases amounts to an unwieldy laundry list. Just look at how long the list of cognitive biases is on Wikipedia. This frustration is usually paired with some other criticism of the field: that results don’t replicate, or that there’s no underlying theory, or that it’s a mistake to benchmark the human brain against the so-called “rational ideal.”

I’m not very moved by these critiques; if the list of biases is long, it’s partly because the psychology of heuristics and biases is appropriately a very empirical field. An overarching theory would be nice, but only if it can explain the facts. The whole point of behavioral economics was to correct the fact that economists had let theory wander away from reality.

Still, I’ve been thinking recently about how to sum up the key findings of behavioral science. What’s the shortest possible summary of what we know about bias and decision making?

Enter Decision Leadership by Don Moore and Max Bazerman. They wrote the textbook on decision making, and in this book they offer advice on how to make good decisions across an organization. I’ve had the pleasure to work with them and I recommend the book.

What I want to share here isn’t their advice, but their succinct summary of decision biases, from the book’s appendix. It’s the best synthesis of the field I know of:

p. 196

Here’s a bit more.

The human mind, for all its miraculous powers, is not perfect. We do impressively well navigating a complex world but nevertheless fall short of the rational ideal. This is no surprise–perfect rationality assumes we have infinite cognitive processing capacity and complete preferences. Lacking these, we adapt by using shortcuts or simplifying heuristics to manage challenges that threaten to exceed our cognitive limitations. These heuristics serve us well much of the time, but they can lead to predictable errors. We group these errors into four types based on the heuristics that give rise to them.

p. 195

The availability heuristic

The first is the availability heuristic, which serves as an efficient way of dealing with our lack of omniscience. Since no one knows everything, we rely instead on the information that is available to us… Reliance on the availability heuristic gives recent and memorable events outsize influence in your likelihood judgments. After experiencing a dramatic event such as a burglary, a wildfire, a hurricane, or an earthquake, your interest in purchasing insurance to protect yourself is likely to go up… The availability heuristic leads us to exaggerate vivid, dramatic, or memorable risks–those we can easily retrieve from memory… The availability heuristic biases all of us toward overreliance on what we know or the data we have on hand. Sometimes information is easier to recall because it is emotionally vivid, but other information is privileged simply due to the quirks of human memory… You can reduce your vulnerability to the availability bias by asking yourself what information you would like to have in order to make a fully informed decision–and then go seek it out.

p. 195-198

The confirmation heuristic

The second is the confirmation heuristic, which simplifies the process by which we gather new information… One of the challenges that impedes us in seeking out the most useful evidence to inform our decisions is that confirmation is so much more natural than disconfirmation… Even our own brains are better at confirmation than disconfirmation. We are, as a rule, better at identifying the presence than the absence of something. Identifying who is missing from a group is more difficult than determining who is present. That has the dangerous consequence of making it easier to find evidence for whatever we’re looking for… Confirmation can bias our thought processes even when we are motivated to be accurate. It is even more powerful when it serves a motivation to believe that we are good or virtuous or right. Virtue and sanctimony can align when we defend our group and its belief systems. The motivation to believe that our friends, leaders, and teachers are right can make it difficult to hear evidence that questions them… The automatic tendency to think first of information that confirms your expectations will make it easy for you to jump to conclusions. It will make it easy for you to become overconfident, too sure that the evidence supports your beliefs… If you want to make better decisions, remind yourself to ask what your critics or opponents would say about the same issue.

p. 195, 199-202

The representativeness heuristic

The third is the representativeness heuristic, which stands in for full understanding of cause and effect relationships. We make assumptions about what causes what, relying on the similarity between effects and their putative causes…

p. 195

This is a tricky one so I want to step outside the book for a second and supplement the definition above with the definition from the American Psychological Association:

representativeness heuristic: a strategy for making categorical judgments about a given person or target based on how closely the exemplar matches the typical or average member of the category. For example, given a choice of the two categories poet and accountant, people are likely to assign a person in unconventional clothes reading a poetry book to the former category; however, the much greater frequency of accountants in the population means that such a person is more likely to be an accountant.
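The APA example turns on base rates, so a quick worked example may help. The numbers below are entirely hypothetical; the point is that even if the evidence (unconventional clothes, a poetry book) is far more typical of poets, a large enough base-rate advantage still makes “accountant” the better guess.

```python
# A quick Bayes'-rule illustration of the base-rate point; all numbers are
# hypothetical, chosen only to show how the base rate can dominate.
p_poet = 0.001          # assumed share of poets in the population
p_accountant = 0.02     # assumed share of accountants

p_evidence_given_poet = 0.50        # P(unconventional clothes + poetry book | poet)
p_evidence_given_accountant = 0.05  # same evidence, given accountant

# Unnormalized posteriors (prior * likelihood)
poet_score = p_poet * p_evidence_given_poet                     # 0.0005
accountant_score = p_accountant * p_evidence_given_accountant   # 0.0010

total = poet_score + accountant_score
print(f"P(poet | evidence)       = {poet_score / total:.2f}")        # ~0.33
print(f"P(accountant | evidence) = {accountant_score / total:.2f}")  # ~0.67
```

Even with evidence that is ten times more likely to come from a poet, the twenty-to-one base-rate advantage makes “accountant” the more probable category.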

The framing heuristic

Finally, framing helps us establish preferences, deciding how good or bad something is by comparing it to alternatives… Context frames our preferences in important ways. Frames drive our choices in ways that rational theory would not predict. We routinely behave as if we are risk averse when we consider choices about gains but flip to being risk seeking when we think about the same prospect as a loss. This reversal of risk preferences owes itself to the fact that we think about gains and losses relative to a reference point–usually the status quo or, in the case of investments, the purchase price… One [consequence] is the so-called endowment effect, our attachment to the stuff we happen to have… The endowment effect can contribute to the status quo bias, which leads us to be irrationally attached to the existing endowment of possessions, privileges, and practices.

p. 195, 205-209
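The gain/loss reversal the authors describe is usually formalized with a reference-dependent value function. The sketch below uses the standard Tversky and Kahneman (1992) parameters; those details are my addition for illustration, not something from Decision Leadership.

```python
# A minimal sketch of a prospect-theory-style value function, using the
# Tversky-Kahneman (1992) parameter estimates (alpha = beta = 0.88,
# loss aversion lambda = 2.25). These details are an assumption on my part,
# not taken from Decision Leadership.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value of a gain or loss x relative to the reference point (x = 0)."""
    return x**alpha if x >= 0 else -lam * ((-x)**beta)

def expected_value_of_gamble(outcomes):
    """Expected subjective value over (outcome, probability) pairs."""
    return sum(p * value(x) for x, p in outcomes)

# Framed as gains: a sure $500 vs. a 50/50 shot at $1,000 or nothing.
sure_gain = value(500)
risky_gain = expected_value_of_gamble([(1000, 0.5), (0, 0.5)])
print(f"sure gain {sure_gain:.0f} vs. risky gain {risky_gain:.0f}")  # sure > risky: risk averse

# Framed as losses: a sure -$500 vs. a 50/50 shot at -$1,000 or nothing.
sure_loss = value(-500)
risky_loss = expected_value_of_gamble([(-1000, 0.5), (0, 0.5)])
print(f"sure loss {sure_loss:.0f} vs. risky loss {risky_loss:.0f}")  # risky > sure: risk seeking
```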

Summing it up

So, what’s the shortest version of behavioral economics and decision making? We are not perfectly rational and in particular we’re intuitively quite bad at statistical and probabilistic thinking. We rely heavily on the information we can easily recall, and we look for reasons to confirm what we already think–especially when doing so protects our self-conception or our group’s status. When we think about cause and effect, we don’t think carefully about probability and counterfactuals; instead, we think in terms of stories and archetypes. We put things into categories and construct causal narratives based on the category. And we are creatures of context: Our frame of reference can sometimes shift based on what is made salient to us, and we are often especially attached to the status quo.

There are a lot of narrower biases within these four heuristics, and no doubt there’s plenty to quibble with in any specific taxonomy like this one. But in my book that’s a pretty decent starting point for summing up a wide set of empirical work, and it clearly helps explain a lot of how we think and how we decide.

A causal question

A good tweet:

Of course, putting this question to good use requires judgment. There are no iron rules for mapping a causal claim to a prediction about large-scale data in the messy real world. And there are always a million ways you can explain why the causal claim is still true even if the predicted real-world effect doesn’t turn up. But it’s still a great question. The point of causal analysis is, ultimately, to make claims about the wider world and how it behaves, not just to predict the outcomes of RCTs.

Forecasting update

In February, I recapped my track record as a forecaster, going back to 2015. I’m a bit more than halfway through my first season as a “Pro” on the INFER forecasting platform so I thought I’d post an update.

  • 64 questions have resolved. I’ve forecast on 7 of them.
  • Of users who’ve forecast on at least five resolved questions this season (the platform’s default leaderboard cutoff), I’m ranked 66 out of 280 (76th percentile). My percentile is basically the same if you include anyone who’s forecast on at least one resolved question.
  • For the all-time leaderboard, among anyone with five resolved questions since 2020, I’m currently 73 out of 558 (87th percentile; see the quick percentile arithmetic after this list).
  • My best performance year to date was on a forecast of venture capital; my worst was on Intel’s Q2 revenue.
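For what it’s worth, the percentiles above line up with a simple “share of the field ranked below me” calculation. Here’s that arithmetic as a quick sketch; this is my reconstruction, not INFER’s official methodology.

```python
# A quick sketch of the percentile arithmetic: the share of the field ranked
# below a given position. This is a reconstruction, not INFER's official
# calculation, but it reproduces the figures above.

def percentile(rank: int, total: int) -> float:
    return (total - rank) / total

print(f"Season:   rank 66 of 280 -> {percentile(66, 280):.0%}")   # ~76%
print(f"All-time: rank 73 of 558 -> {percentile(73, 558):.0%}")   # ~87%
```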

Prediction and resilience

In Radical Uncertainty, the authors make a common argument: Instead of trying to predict the future, you should prepare for a wide range of scenarios, including ones you can’t even fully articulate. You should become more robust or resilient:

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

This might make sense as an argument against certain types of forecasting exercises that organizations undertake. But it seems to me to still clearly require accurate prediction.

They’re essentially shifting the necessary prediction from the likelihood of a specific future outcome (will there be a recession, for example) to a prediction of the causal effect of a certain action across a range of unspecified outcomes.

For example, one way to be more resilient as a company is to hold more cash. Cash keeps your options open, so instead of trying to predict exactly where the economy is going, hold more cash.

But does that get you away from predictions and forecasting?

Not as far as I can tell. You’re basically predicting that, all else equal, your firm will do better (by whatever metrics) if you increase your cash holdings than if you don’t. You’re making a prediction about a causal effect, rather than about a specific external scenario. Prediction, though, is still a key part of the enterprise.
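One way to see the shift is as a comparison of expected outcomes across scenarios: you don’t need a sharp probability for any one scenario if the action you’re weighing helps in most of them. The scenarios, payoffs, and probabilities below are entirely made up for illustration.

```python
# A minimal sketch of the point: all scenarios, payoffs, and probabilities
# are hypothetical. The "prediction" being made is about the causal effect
# of holding more cash, not about which scenario will occur.

scenarios = {
    # name: (probability, payoff with extra cash, payoff without)
    "boom":             (0.30, 8.0, 10.0),   # cash drag in good times
    "mild recession":   (0.40, 4.0, 2.0),
    "severe recession": (0.20, 1.0, -5.0),   # cash prevents a fire sale
    "something else":   (0.10, 3.0, 1.0),    # the scenario you can't articulate
}

with_cash = sum(p * hi for p, hi, lo in scenarios.values())
without   = sum(p * lo for p, hi, lo in scenarios.values())

print(f"Expected payoff with more cash: {with_cash:.1f}")
print(f"Expected payoff with less cash: {without:.1f}")
# Even if the scenario probabilities are rough guesses, the comparison is a
# prediction: that the causal effect of holding more cash is positive on net.
```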

In their book Prediction Machines, Agrawal, Gans, and Goldfarb define prediction as “using information you have to generate information you don’t have.” They’re using the word more broadly than just predicting the future, but that definition has value even if we add “about the future” to the end of it.

The art of decision making isn’t about trying to predict everything that might have an influence on you. It’s about using the information you have to generate information you don’t have; you make the predictions that seem most useful, the ones with the highest payoff.

Sometimes that is straightforward forecasting of a scenario: Will there be a recession next year? Sometimes it’s predicting the effect of a choice you’re considering, ahead of time: Will we be more likely to succeed if we do X to improve our cash holdings?

The difference isn’t that one of these involves prediction and the other says prediction is impossible. Both are bets about the future. The difference is in their potential payoff. Does the information you have give you any purchase on the question? Maybe your information suggests that recession forecasting is borderline impossible but there’s good evidence on the causal effects of increasing cash holdings. You’re likely to make a better prediction in the latter case.

Use the information you have to generate information you don’t have. That takes time and effort — to collect information, to analyze it, to apply it to the decision at hand. The question when you’re considering the value of a forecast isn’t Can I avoid making predictions here? It’s Is the effort I’ll put into this prediction the best use of my time given what I’m trying to achieve?

The German labor market

A paper by an MIT economist explains its unique features…

Germany has less low-wage work and less inequality than the US, but more flexibility and lower unemployment than France.

Germany—the world’s fourth largest economy—has remained partially insulated from the growing labor market challenges faced by the United States and other high-income countries. In many advanced economies, the past few decades have seen sustained increases in earnings inequality, a fall in the labor share, the disappearance of “good jobs” in manufacturing, the rise of precarious work, and a deterioration in the power of organized labor and individual workers. These developments threaten to prevent economic growth from translating into shared prosperity. Figure 1 shows that compared to the United States, German organized labor has remained strong. Half of German workers are covered by a collective bargaining agreement, compared to 6.1 percent of private-sector Americans (BLS, 2022). Trust in unions is almost twice as high in Germany compared to the US. Employees in Germany work fewer hours, the country’s low-wage sector is 25 percent smaller, and labor’s share of national income is higher. The German manufacturing sector still makes up almost a quarter of GDP (compared to 12 percent in the US). Germany has one of the highest robot penetration rates in the world (IFR, 2017)—yet in contrast to the US (Acemoglu and Restrepo, 2020), robotization has not led to net employment declines in Germany, especially in areas with high union strength (Dauth, Findeisen, Suedekum, and Woessner, 2021). At the same time, relative to other OECD countries—many of which, like France or Italy, have maintained even higher collective bargaining coverage through more rigid bargaining systems—the German labor market features low unemployment and high labor force participation (though also a larger low-wage sector).

Bargaining is mostly (but not entirely) at the sectoral level rather than the firm level.

The first pillar is the sectoral bargaining system. In Germany, unions and employer associations engage in bargaining at the industry-region level, leading to broader coverage than in the US. Meanwhile, partial decentralization of bargaining to the firm level—through flexibility provisions in sectoral agreements, or direct negotiations between individual firms and sectoral unions—gives firms space to adapt to changing circumstances. However, this flexibility has also resulted in a gradual erosion of bargaining coverage.

Workers have multiple channels to share their perspectives in firms’ decision making.

The second pillar of the German model is firm-level codetermination. Workers are integrated into corporate decision-making through membership on company boards and the formation of “works councils,” leading to ongoing cooperative dialogue between shareholders, managers, and workers. Overall, the German model combines centralized “social partnership” between unions and employer associations at the industry-region level with decentralized mechanisms for local wage-setting, dialogue, and customization of employment conditions.

There’s a tradeoff between flexibility and collective bargaining. Germany’s balance between them has evolved over time.

A recurrent theme in our discussion of the German model will be a tension at the heart of the model: between firms’ flexibility and workers’ collective bargaining strength. Since the 1990s, the model has become more decentralized and flexible. This evolution has arguably contributed to reductions in unemployment and increases in economic growth, but has entailed a substantial erosion of collective bargaining and works council coverage (as Figure 2 illustrates) and a weakening of bargaining agreements. This erosion may explain Germany’s slowly increasing—and perhaps underappreciated—exposure to the afflictions suffered by other developed-world labor markets: rising wage inequality and the spread of low-wage, precarious jobs.