What’s the economic impact of venture capital?

In my post on the case for technology, I cited a 2011 paper that found that an increase in venture capital funding in a city was associated with higher income, higher employment, and more new firms there.

In this post, I want to clip together a few resources on what we know about VC’s economic and social impact.

Here’s the conclusion of the paper linked above, on VCs and geography:

We find that increases in the supply of venture capital in an MSA stimulate the production of new firms in the region. This effect appears consistent with either of two mechanisms. First, would-be entrepreneurs in need of capital may incorporate the availability of such capital into their calculations when trying to decide whether to start their firms. Second, the firms that VC firms finance may serve as inspiration and training grounds for future entrepreneurs. We further find that an expanded supply of venture capital raises employment and aggregate income in a region. At least some of these employment and income effects probably stem from venture capital allowing entrepreneurs to create value by pursuing ideas that they otherwise could not have. Table 10 summarizes the magnitudes of these estimated effects across our various specifications.

Here’s a bit from a 2000 paper in the Journal of Economic Perspectives:

After addressing these causality concerns, the results suggest that venture funding does have a strong positive impact on innovation. The estimated coefficients vary according to the techniques employed, but on average, a dollar of venture capital appears to be three to four times more potent in stimulating patenting than a dollar of traditional corporate R&D. The estimates therefore suggest that venture capital, even though it averaged less than 3 percent of corporate R&D from 1983 to 1992, is responsible for perhaps 10 percent of U.S. industrial innovations in this decade.
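As a rough sanity check, the “perhaps 10 percent” figure falls out of the two numbers in that quote. Here’s a back-of-envelope sketch (my own arithmetic, not the paper’s actual estimation method):

```python
# Back-of-envelope check of the figures quoted above (illustrative only;
# the paper's own estimate comes from a more careful econometric exercise).
vc_share_of_rd = 0.03         # VC averaged < 3% of corporate R&D, 1983-1992
potency_multipliers = (3, 4)  # a VC dollar ~3-4x as potent as an R&D dollar for patenting

for m in potency_multipliers:
    # VC-attributable innovations as a share of all innovations,
    # measuring funding in units of corporate R&D dollars.
    implied_share = (m * vc_share_of_rd) / (1 + m * vc_share_of_rd)
    print(f"potency {m}x -> implied VC share of innovations: {implied_share:.1%}")
# Prints roughly 8.3% and 10.7%, in the neighborhood of "perhaps 10 percent."
```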

And here are several snippets from a 2011 NBER literature review:

Overall we believe that the body of empirical evidence is consistent with the notion that VCs select more innovative companies, and then help them with the commercialization process. The results suggest that VC plays a greater role for commercialization (as measured by bringing products to market, and forging strategic alliances) than for the generation of further innovation (as measured by patents and TFP)…

Company-level studies typically confirm this positive relationship between VC and measures of economic growth. Puri and Zarutskie (2011), using US Census data, find that only 0.11% of new companies created over a 25 year sample period from 1981-2005 are funded by VC, yet these companies account for 4% to 5.5% of employment. They show that VC-backed companies grow faster at every stage of the investment cycle, i.e., both before and after the receipt of VC. Chemmanur et al. (2011a) find a positive effect of VC on company productivity. Davila et al. (2003) and Engel and Keilbach (2007) also find a positive effect of VC on employment.

Overall the literature consistently finds a positive relationship between VC funding and other measures of economic value creation. While the literature seems to identify social value creation, there remains an open question on the social costs of VC.
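To put the Puri and Zarutskie figures quoted above in perspective, here’s a quick illustrative calculation of how overrepresented VC-backed companies are in employment (my arithmetic on the quoted numbers, not a figure from the review):

```python
# How overrepresented are VC-backed companies in employment, per the quoted figures?
vc_backed_share_of_new_firms = 0.0011    # 0.11% of new companies, 1981-2005
employment_share_range = (0.04, 0.055)   # yet they account for 4% to 5.5% of employment

for employment_share in employment_share_range:
    ratio = employment_share / vc_backed_share_of_new_firms
    print(f"employment share {employment_share:.1%} -> ~{ratio:.0f}x their share of new firms")
# Roughly 36x to 50x: VC-backed companies employ far more people per firm
# than the typical new company in the sample.
```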

And one more recommended paper.

What will the new mixed economy look like?

“After the collapse of the Soviet Union in 1991, the 20th century’s ideological contest seemed over,” The Economist wrote in last week’s cover story on millennials and socialism. “Capitalism had won and socialism became a byword for economic failure and political oppression.” These sentences aren’t wrong, but they are misleading. It’s true that market-oriented economies fared better over the 20th century on various measures of human welfare than did centrally-planned ones. But they did so largely by abandoning their commitment to laissez-faire capitalism and inventing something new: the mixed economy.

Before World War II, public spending on social services was virtually nonexistent in OECD countries. In the post-war years it exploded and today averages just over 20% of GDP. In the middle of the 20th century, governments in these economies began providing health insurance, public education, retirement support, and more. In aggregate, these policies not only didn’t get in the way of economic growth; they likely increased it. And they enabled the so-called “capitalist” countries to deliver not just better economic outcomes than centrally-planned ones but also longer, healthier, and more-educated lives for their citizens. The mixed economy was the original “third way,” before that term came to be associated with centrist neoliberalism.

Today, as The Economist notes, “Socialism is storming back because it has formed an incisive critique of what has gone wrong in Western societies.” But the challenge of the 21st century, like its predecessor, is not about capitalism vs. socialism. It is about creating a new kind of mixed economy.

(More here and here.)

Crowds and replicability, again

More evidence that prediction markets can anticipate which studies will replicate. My previous posts on this idea are here and here.

H/t to the Vox Future Perfect newsletter, which discussed what this implies for journalism. I have thoughts. Short version: the interpretive turn in journalism applies to research coverage, too. You don’t just report the findings; you interpret them. Recall that The New York Times said it “should continue to employ a healthy mix of newshounds, wordsmiths and analysts.” Emphasis mine. Under the right conditions, it’s reasonable to think that the best analytical journalists will outperform at least the average academic, for reasons explained in this post and this one.

Analysis vs. science

The work of an analyst and the work of a scientist have some things in common. They’re both fundamentally truth-seeking endeavors, and they both rely on versions of the scientific method. But they’re also quite different. What explains those differences? Among other things, I’d venture it’s that analysis is designed to maximize truth-seeking in the near term, while science is set up to maximize it over the long term.*

Take, for example, how an analyst and a scientist might set their bar for the quality of evidence. A scientist might say that for a finding to be taken seriously it needs to employ some plausible method of establishing causality, to have gone through peer review, etc. (These standards will vary by field.) Of course, the scientist wouldn’t say that evidence that didn’t meet those requirements was worthless. But they would likely treat these other sources of evidence as inputs and inspiration for more rigorous research that does employ the highest standards of their field. And when they assess the state of knowledge on a subject, they’re more likely to emphasize what’s been established with that higher level of rigor.

The scientist’s norms are set up to encourage steady progress toward truth over the long term, which means gradually adding high-quality evidence and understanding to a field of knowledge. They can afford to treat less-rigorous evidence as input and inspiration because they’re focused on that long-term progress.

The analyst, by contrast, might have a far lower bar for rigor and might sample from a wider base of evidence. That’s because the analyst’s norms are set by the need to reach the best possible answer quickly, which often means reaching a conclusion in the face of scant or highly imperfect evidence. Analysis is a skill of its own.

One interesting example: Wall Street analysts and policymakers shied away from DSGE models of the macroeconomy even as those models became popular within much of academic economics. The appeal of these models was (supposedly) that they addressed serious theoretical shortcomings of previous models and that they did a better job of connecting macro thinking to the economy’s micro-foundations.** You can see how these things would appeal to scientists, in the abstract at least. Over time, a science needs to improve its theories by improving their coherence, their ability to track reality, and their connection to other branches of science.

Macroeconomic analysts, meanwhile, were preoccupied with the near term.*** They wanted to know what would happen and how it would be affected by all sorts of variables, and they wanted the best available understanding now. The older class of models turned out to be better at this.

One implication of this is that scientists won’t always be the best guides to the empirical side of policymaking. Yes, they’re deeply informed and they often do put their “analyst” hat on when advising policymakers. But the skills they develop as scientists (researchers) are subtly distinct from the skills that analysts develop. Policymakers often need the best answer now, and that’s not always the same as the best answer that science has to give.

*Yes, Kuhn, paradigms — I know. But pragmatically speaking this still holds as at least one useful way to think about science.

**I think this holds whether or not you think the DSGE models were ultimately a misstep for macroeconomics as a field.

***This sometimes gets treated merely as the difference between “prediction” and “explanation,” but that’s incomplete.

Cass Sunstein on political expressivism

From The Cost-Benefit Revolution:

Arguments about public policy are often expressive. People focus on what they see as the underlying values. They use simple cues. They favor initiatives that reflect the values that they embrace or even their conception of their identity. If the issue involves the environment, many people are automatically drawn to aggressive regulation, and many others are automatically opposed to it. When you think about regulation in general, you might ask: What side are you on? That question might incline you to enthusiastic support of, for example, greater controls on banks or polluters – or it might incline you to fierce opposition toward those who seek to strengthen the government’s hand.

In this light, it is tempting to think that the issues that divide people are fundamentally about values rather than facts. If so, it is no wonder that we have a hard time overcoming those divisions. If people’s deepest values are at stake, and if they really differ, then reaching agreement or finding productive solutions will be difficult or perhaps impossible. Skepticism about experts and expertise – about science and economics – is often founded, I suggest, on expressivism.

As an alternative to expressive approaches, I will explore and celebrate the cost-benefit revolution, which focuses on actual consequences – on what policies would achieve – and which places a premium on two things: science and economics.

It’s interesting to think about how the two American political parties might react to this — see here and here — and how that might change in the coming years.

Network Propaganda: Institutions and technology

I highly recommend the book Network Propaganda. I’ve written recently about institutions and technology, so I wanted to highlight this bit from the end of that book:

Our study suggests that we should focus on the structural, not the novel; on the long-term dynamic between institutions, culture, and technology, not only the disruptive technological moment; and on the interaction between the different media and technologies that make up a society’s media ecosystem, not on a single medium, like the internet, much less a single platform like Facebook or Twitter. The stark differences we observe between the insular right-wing media ecosystem and the majority of the American media environment, and the ways in which open web publications, social media, television, and radio all interacted to produce these differences suggest that the narrower focus will lead to systematically erroneous predictions and diagnoses.