What will the new mixed economy look like?

“After the collapse of the Soviet Union in 1991, the 20th century’s ideological contest seemed over,” The Economist wrote in last week’s cover story on millennials and socialism. “Capitalism had won and socialism became a byword for economic failure and political oppression.” These sentences aren’t wrong, but they are misleading. It’s true that market-oriented economies fared better over the 20th century on various measures of human welfare than did centrally planned ones. But they did so largely by abandoning their commitment to laissez-faire capitalism and inventing something new: the mixed economy.

Before World War II, public spending on social services was virtually nonexistent in OECD countries. In the post-war years it exploded and today averages just over 20% of GDP. In the middle of the 20th century, governments in these economies began providing health insurance, public education, retirement support, and more. In aggregate, these policies not only didn’t get in the way of economic growth; they likely increased it. And they enabled the so-called “capitalist” countries to deliver not just better economic outcomes than centrally planned ones but also longer, healthier, and more-educated lives for their citizens. The mixed economy was the original “third way,” before that term came to be associated with centrist neoliberalism.

Today, as The Economist notes, “Socialism is storming back because it has formed an incisive critique of what has gone wrong in Western societies.” But the challenge of the 21st century, like its predecessor, is not about capitalism vs. socialism. It is about creating a new kind of mixed economy.

(More here and here.)

Crowds and replicability, again

More evidence that prediction markets can anticipate which studies will replicate. My previous posts on this idea are here and here.
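How does a market turn individual bets on “will this study replicate?” into a probability? One common mechanism is Hanson’s logarithmic market scoring rule (LMSR), where the price of each outcome moves as traders buy shares. The sketch below is illustrative only; the replication markets in the linked studies may use a different mechanism, and the share quantities and liquidity parameter here are hypothetical.

```python
import math

def lmsr_price(quantities, b=100.0):
    """Implied probabilities given outstanding shares per outcome (LMSR)."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function; a trade costs C(q_new) - C(q_old)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# A market on one study's replication: outcomes are [yes, no].
# It opens flat; a trader who expects replication buys 50 'yes' shares,
# which pushes the implied probability of replication above 50%.
before = lmsr_price([0.0, 0.0])   # [0.5, 0.5]
after = lmsr_price([50.0, 0.0])   # 'yes' now ~0.62
cost = lmsr_cost([50.0, 0.0]) - lmsr_cost([0.0, 0.0])
```

The aggregate price is the market’s running estimate of the replication probability, which is why these prices can be compared directly against later replication outcomes.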

H/t to the Vox Future Perfect newsletter, which discussed what this implies for journalism. I have thoughts. Short version: the interpretive turn in journalism applies to research coverage, too. You don’t just report the findings; you interpret them. Recall that The New York Times said it “should continue to employ a healthy mix of newshounds, wordsmiths and *analysts*.” Emphasis mine. Under the right conditions, it’s reasonable to think that the best analytical journalists will outperform at least the average academic for reasons explained in this post and this one.

Analysis vs. science

The work of an analyst and the work of a scientist have some things in common. They’re both fundamentally truth-seeking endeavors, and they both rely on versions of the scientific method. But they’re also quite different. What explains those differences? Among other things, I’d venture it’s that analysis is designed to maximize truth-seeking in the near-term while science is set up to maximize it in the long-term.*

Take, for example, how an analyst and a scientist might set their bar for the quality of evidence. A scientist might say that for a finding to be taken seriously it needs to employ some plausible method of establishing causality, to have gone through peer review, etc. (These standards will vary by field.) Of course, the scientist wouldn’t say that evidence that didn’t meet those requirements was worthless. But they likely would treat these other sources of evidence as inputs and inspiration for more rigorous research that does employ the highest standards of their field. And when they assess the state of knowledge on a subject they’re more likely to emphasize what’s been established using that higher level of rigor.

The scientist’s norms are set up to encourage the steady progress toward truth over the long-term, which means gradually adding high-quality evidence and understanding to a field of knowledge. They can afford to treat less-rigorous evidence as input and inspiration because they’re focused on that long-term progress.

The analyst, by contrast, might have a far lower bar for rigor and might sample from a wider base of evidence. That’s because the analyst’s norms are set by the need to reach the best possible answer quickly, which often means reaching a conclusion in the face of scant or highly imperfect evidence. Analysis is a skill of its own.

One interesting example: Wall Street analysts and policymakers shied away from DSGE models of the macroeconomy even as those models became popular within much of academic economics. The appeal of these models was (supposedly) that they improved on serious theoretical shortcomings of previous models and that they did a better job of connecting macro thinking to the economy’s micro-foundations.** You can see how these things would appeal to scientists, in the abstract at least. Over time, a science needs to improve its theories by improving their coherence, their ability to track reality, and their connection to other branches of science.

Macroeconomic analysts, meantime, were preoccupied by the near-term.*** They wanted to know what would happen, and how it would be affected by all sorts of variables — and they wanted the best available understanding now. The older class of models turned out to be better at this.

One implication of this is that scientists won’t always be the best guides to the empirical side of policymaking. Yes, they’re deeply informed and they often do put their “analyst” hat on when advising policymakers. But the skills they develop as scientists (researchers) are subtly distinct from the skills that analysts develop. Policymakers often need the best answer now, and that’s not always the same as the best answer that science has to give.

*Yes, Kuhn, paradigms — I know. But pragmatically speaking this still holds as at least one useful way to think about science.

**I think this holds whether or not you think the DSGE models were ultimately a misstep for macroeconomics as a field.

***This sometimes gets treated merely as the difference between “prediction” and “explanation” but that’s incomplete.

Cass Sunstein on political expressivism

From The Cost-Benefit Revolution:

Arguments about public policy are often expressive. People focus on what they see as the underlying values. They use simple cues. They favor initiatives that reflect the values that they embrace or even their conception of their identity. If the issue involves the environment, many people are automatically drawn to aggressive regulation, and many others are automatically opposed to it. When you think about regulation in general, you might ask: What side are you on? That question might incline you to enthusiastic support of, for example, greater controls on banks or polluters–or it might incline you to fierce opposition toward those who seek to strengthen the government’s hand.

In this light, it is tempting to think that the issues that divide people are fundamentally about values rather than facts. If so, it is no wonder that we have a hard time overcoming those divisions. If people’s deepest values are at stake, and if they really differ, then reaching agreement or finding productive solutions will be difficult or perhaps impossible. Skepticism about experts and expertise–about science and economics–is often founded, I suggest, on expressivism.

As an alternative to expressive approaches, I will explore and celebrate the cost-benefit revolution, which focuses on actual consequences–on what policies would achieve–and which places a premium on two things: science and economics.

It’s interesting to think about how the two American political parties might react to this — see here and here — and how that might change in the coming years.

Network Propaganda: Institutions and technology

I highly recommend the book Network Propaganda. I’ve written recently about institutions and technology, so I wanted to highlight this bit from the end of that book:

Our study suggests that we should focus on the structural, not the novel; on the long-term dynamic between institutions, culture, and technology, not only the disruptive technological moment; and on the interaction between the different media and technologies that make up a society’s media ecosystem, not on a single medium, like the internet, much less a single platform like Facebook or Twitter. The stark differences we observe between the insular right-wing media ecosystem and the majority of the American media environment, and the ways in which open web publications, social media, television, and radio all interacted to produce these differences suggest that the narrower focus will lead to systematically erroneous predictions and diagnoses.


Redistribution vs. predistribution

There’s an ongoing debate in left-of-center policy circles between redistribution and “predistribution”, and it’s often framed like this:

The redistribution camp wants to let the market do its thing; if (when) it creates winners and losers, we should redistribute those gains using the tax system and various kinds of spending programs. Doing so would mean getting the best of both worlds, the thinking goes: the efficiency of the market with the welfare gains of redistribution.

The predistribution camp raises a number of objections to this and suggests that instead policy should directly aim to transform the market so that its outcomes are more just, even before taxes and transfers. The predistribution camp has a few different arguments; my list is certainly not comprehensive:

  1. Political economy: a society full of rich people will fight tooth and nail against redistribution.
  2. Fairness, dignity, and trust: People don’t want to be marked as an economic “loser” and then compensated for it. They want to feel they’ve earned a good living and they will lose trust in a system that doesn’t offer them that.
  3. Growth: Markets are highly imperfect and we’re actually leaving growth on the table when we sit back and leave things to “the market”, which is itself highly contingent.

These are good critiques, and I’m quite open to lots of predistribution ideas. The easy response is to say we need some of both.

But I do think this debate can sometimes misunderstand the role of redistribution. Specifically, it misses the sense in which today’s redistribution is tomorrow’s predistribution.

To see what I mean it’s easier to start in a very different context: cash transfers in developing countries. I have no strong view on the efficacy of cash transfers relative to other development interventions; my point is just to note that the argument for cash transfers in this context is that they spark economic development. Here’s economist Chris Blattman, who studies this, in 2013 (if you want more recent information on the effectiveness of cash transfers for development try here):

So how do you create “good” jobs and productive work? Another way of asking this question is “what is holding young people back?” or “what constrains them?”…

More and more, economists think that the real constraint is capital. Studies show that the poor, on average, have high-earning opportunities if they get a little cash or equipment. Studies with existing farmers or businesspeople have seen returns of 40 to 80% a year on cash grants.

This gels with economic theory, which says that infusions of capital should expand people’s choice of occupations, self-employment, and earnings. People can’t get access to that capital through loans because credit markets are so broken and expensive. This can be a development trap, or at the very least a drag on growth.
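To see why economists call this a trap, a toy compounding calculation helps: at the 40–80% annual returns Blattman cites, even a small one-time grant grows quickly, which is exactly the growth that broken credit markets leave on the table. The figures below are hypothetical round numbers for illustration, not data from any study.

```python
def grow(principal, annual_return, years):
    """Value of capital compounding at a fixed annual return."""
    return principal * (1 + annual_return) ** years

# A hypothetical $100 grant compounding at the low and high ends
# of the cited 40-80% annual return range, over five years.
grant = 100.0
low = grow(grant, 0.4, 5)    # 40% per year
high = grow(grant, 0.8, 5)   # 80% per year
```

At the low end the grant roughly quintuples in five years; at the high end it grows nearly twentyfold. If credit markets won’t finance opportunities like that, a cash transfer isn’t a static subsidy but a release of blocked growth.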

Redistribution in the U.S. is taking place in a very different context, of course. What’s true in developing countries might not be in developed ones. But when we talk about redistribution’s merits (or lack thereof) in the U.S. we ought to consider its ability to change economic fundamentals. And we have evidence that this sort of thing happens. Welfare state programs — including transfers like food stamps — increase entrepreneurship by helping to de-risk it. Likewise, Raj Chetty and colleagues have demonstrated that the combination of school performance and parents’ income predicts the likelihood that a child will go on to file a patent. The “Einstein gap” that Chetty points to is an income gap. It’s at least plausible (if not likely) that if you substantially raised the incomes of parents, their kids’ likelihood of being a (well-compensated) inventor would increase dramatically. Today’s redistribution is tomorrow’s predistribution.

Of course, most left-of-center policy people get all this. And politicians in particular are keen not to pitch redistribution programs simply as static subsidies. Several 2020 primary candidates are doing a good job of proposing redistributive policy ideas in the context of a broader economic transformation. It’s when the more philosophical redistribution vs. predistribution debates get going that the problematic framing I’ve described typically seems to occur.

For example, when we frame the basic income simply as a way to subsidize a permanent underclass in the face of automation, or when we treat the welfare state as merely a way to placate those who’ve lost the lottery of winner-take-all capitalism, we miss at least part of redistribution’s potential. Transferring money from richer to poorer people changes the economy in important ways, and that’s a big part of its appeal.

That doesn’t mean we should rely solely on redistribution and neglect predistributionist ideas. Many of those ideas are great! And the predistributionists are right to have drawn the lesson that we’ve been too deferential to market outcomes, as if they were delivered from on high. Furthermore, none of what I’ve said addresses the political economy critique that certain market outcomes make certain policy changes harder.

Nonetheless, redistribution should be an important part of any economic policy agenda. And it shouldn’t be framed as simply a static transfer from “winners” to “losers” because that’s not what it is.