The case for technology

I tend to be optimistic about technology, and so friends often ask me why. I haven’t always answered that question as well as I would have liked, so I want to try to do so here. To make the case for technology as a driver of human welfare, I’ll start with the macro-historical view, suggesting that the world is improving and that technology appears to be playing a central role. Then I’ll get into why economists expect technology to be a key driver of economic growth, in line with the broader historical picture. Next, I’ll present some more direct, plausibly causal evidence for technology’s benefits. Then I’ll discuss the role of institutions, culture, and policy. Finally, I’ll end with a short case study.

The big picture


[Chart: Life expectancy globally since 1770]

Human welfare has improved in any number of ways over the past two hundred years, whether measured through life expectancy, GDP per capita, homicide rates, or any number of other variables. (I’m borrowing charts from the excellent site OurWorldInData.) Not everything is improving, of course, and this sort of progress can’t be cause for complacency. But it is real nonetheless.

As the economic historian Robert Gordon writes, “A newborn child in 1820 entered a world that was almost medieval: a dim world lit by candlelight, in which folk remedies treated health problems and in which travel was no faster than that possible by hoof or sail.” But things changed, not so much gradually as all at once. “Some measures of progress are subjective,” Gordon continues, “but lengthened life expectancy and the conquest of infant mortality are solid quantitative indicators of the advances made over the special century [1870-1970] in the realms of medicine and public health. Public waterworks not only revolutionized the daily routine of the housewife but also protected every family against waterborne diseases. The development of anesthetics in the late nineteenth century made the gruesome pain of amputations a thing of the past, and the invention of the antiseptic surgery cleaned up the squalor of the nineteenth-century hospital.”

[Chart: GDP per capita in the UK since 1270]

What changed? The short answer is the industrial revolution. A series of what Gordon calls “Great Inventions,” like the railroad, the steamship, and the telegraph, set off this transformation. Electricity and the internal combustion engine continued it. And though these “Great Inventions” were perhaps most central, countless other technologies made life better in this period. The mason jar helped store food at home; refrigeration transformed food production and consumption; the radio changed the way people received information.

Gordon’s book The Rise and Fall of American Growth, from which I’m quoting, is rich with detail and data and well worth a read. Its conclusion, and my point here, is that the rapid rise in living standards over the past two hundred years is directly linked to new technologies. Technology isn’t the only thing that has driven progress, of course. More inclusive political institutions have obviously driven tremendous progress, too. But technology is a central part of progress, and without it our potential to improve human welfare would be more limited.

The theory

Technology has long played a central role in economic theory. How much an economy can produce depends in part on the availability of inputs like workers, raw materials, or buildings. But what determines how effectively these inputs can be combined — how much the workers can produce given a certain amount of resources and equipment? The answer is technology, and for a long time economists thought of it as outside the bounds of their models. Technology was this extra “exogenous” thing. It was “manna from heaven” — critical to explaining economic growth but not itself explained by economic models. As economic historian Joel Mokyr wrote in 1990, “All work on economic growth recognizes the existence of a ‘residual,’ a part of economic growth that cannot be explained by more capital or more labor… Technological change seems a natural candidate to explain this residual and has sometimes been equated with it forthwith.”
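To make the “residual” concrete, here is the standard growth-accounting decomposition in its textbook form (my own sketch, not Mokyr’s notation):

$$Y_t = A_t\,K_t^{\alpha}L_t^{1-\alpha} \quad\Longrightarrow\quad \frac{\dot A_t}{A_t} = \frac{\dot Y_t}{Y_t} - \alpha\,\frac{\dot K_t}{K_t} - (1-\alpha)\,\frac{\dot L_t}{L_t}$$

Here $Y$ is output, $K$ is capital, $L$ is labor, $\alpha$ is capital’s share of income, and $A$ is “total factor productivity.” Whatever output growth is left over after accounting for growth in capital and labor shows up in $A$; that leftover is the residual, and technology is the leading candidate to explain it.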

But around the time he was writing that, economists’ theory of technology was starting to change. Paul Romer, among others, started publishing models of economic growth that more directly accounted for technology. In these models, “ideas” were the source of economic growth, and the growth of ideas depended in part on how many people went into the “ideas producing” sector, sometimes called R&D. In 2018, Romer won the Nobel Prize in economics for this work. David Warsh’s book Knowledge and the Wealth of Nations is a wonderful read on this shift in growth theory.
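For a flavor of what “ideas-based” growth looks like on paper, here is a stylized version of the idea-production function in these models (a textbook simplification, not Romer’s exact specification):

$$\dot A_t = \bar z \, L_{A,t} \, A_t$$

where $A_t$ is the stock of ideas, $L_{A,t}$ is the number of people working in the ideas-producing sector, and $\bar z$ is research productivity. Because ideas are nonrival (everyone can use the same idea at once), sustained growth in living standards ends up hinging on the growth of $A$, which in turn depends on how many people are producing ideas.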

My point here is simply to note that economic theory suggests that for sustained economic growth to happen, we need a steady stream of new ideas and new technologies. A theory is not evidence per se, but it fits with the point of this essay: if we want to improve living standards over time, technology will likely be important.

The evidence

If both the broad historical picture and economic theory support the idea that technology is essential for rising living standards, what about more direct, micro-level evidence? One wonderful study, written about by my former colleague Tim Sullivan, used subscriptions to Diderot’s Encyclopedie to measure how the spread of technical knowledge affected economic growth in Enlightenment-era Europe:

 “Subscriber density to the Encyclopedie is an important predictor of city growth after the onset of industrialization in any given city in mid-18th century France.” That is, if you had a lot of smarty pants interested in the mechanical arts in your city in the late 18th century (as revealed by their propensity to subscribe to the Encyclopedie), you were much more likely to grow faster later on. Those early adopters of technology – let’s call them entrepreneurs, or maybe even founders – helped drive overall economic vitality. Other measures like literacy rates, by contrast, did not predict future growth. Why? The authors hypothesize that these early adopters used their newly acquired knowledge to build technologically based businesses that drove regional prosperity.

Another study of U.S. inventors from 1880 to 1940 links patenting to GDP at the state level. Yet another links a city’s innovation-oriented startups to its future economic growth. Another paper confirms that the rapid economic growth in the 1990s was due to technical change. This one links venture capital funding to economic growth. I could go on.

Policy, institutions, and culture

People often say that technology is a tool, and so neither inherently good nor bad. That’s true enough, but what I’m arguing is that it’s an essential part of progress. If we want to improve human welfare, using technology well is going to be a big part of that, at least in the long run.

Whether technology improves human welfare depends on a lot of things, including policy, institutions, and culture. Economists Daron Acemoglu, Simon Johnson, and James Robinson write:

Economic institutions matter for economic growth because they shape the incentives of key economic actors in society, in particular, they influence investments in physical and human capital and technology, and the organization of production. Although cultural and geographical factors may also matter for economic performance, differences in economic institutions are the major source of cross-country differences in economic growth and prosperity.

In terms of policy, Gordon does a good job explaining how regulations around food quality helped improve welfare, limiting one of the major downsides to the mass production of food. Likewise, regulation was essential to the spread of the electric light, again to limit its downsides in the form of accidents. Mokyr has written extensively on the role of culture in promoting innovation and growth.

Being good at technology — being a society that harnesses it well — depends on much more than technical progress. But that’s part of what I’m arguing for when I lay out the optimistic case for technology. My hope is not just that we’ll blindly embrace new tech, but that we’ll build reliable, trustworthy institutions, create a culture that embraces innovation but acknowledges its risks, and regulate technology wisely with an eye towards both its benefits and its costs.

The electric light

Light used to be fabulously expensive.

[Chart: The price of lighting per million lumen-hours in the UK, in British pounds]

Over time, though, technology changed that. Humans gained control over their environment, opening up new possibilities in terms of how we work, how we entertain ourselves, the communities we live in, and more. I’ve written a lot about the electric light, based in large part on the book Age of Edison. And I see in that story the big points I’ve laid out here. The chart above gives the macro-historical story of light. It used to be wildly expensive, and now it’s something most people, at least in developed countries, can afford. It’s clear that it transformed societies for the better:

The benefits of electrical power seemed widely democratized. By the early twentieth century, all American town dwellers could enjoy some of the pleasure and convenience of an electrified nightlife and a brighter workplace, while domestic lighting was coming within reach of many middle-class consumers and a growing number of urban workers… In this respect, what distinguished the late nineteenth century technological revolution was not its creation of vast private wealth but the remarkable way its benefits extended to so many citizens. The modern industrial system built enormous factories for some but also served a more democratic purpose, improving ‘the average of human happiness’ by providing mundane comforts to the multitude. (Age of Edison 234-235)

But culture, institutions, and policy all mattered. Electric light caused accidents and required regulation. It created new opportunities for capitalists to exploit workers. It contributed to a growing urban-rural divide. The answer, I think, quite clearly is not to denounce the electric light or to roll the clock back to gas lighting. Rather it’s to acknowledge that maximizing the electric light’s benefits required more than technical change. America and the world were better off for that invention, but making the most of it required new rules and norms.

The future

Robert Gordon is skeptical that information technology can replicate the benefits of the industrial revolution. And in recent years key areas of the internet have not turned out well; I’m thinking of social media. Why be optimistic? Partly, I simply don’t see an alternative: I’ve argued that technology is one of the major forces for human progress, and without it the scope for improvements to human welfare is significantly diminished. But partly I’m optimistic because regulation, culture, and institutions can help make IT and the internet better. They can help us maximize the benefits we receive from them (which are already substantial). I have some thoughts as to what that might look like, but the history of technology suggests that getting the most from any new invention requires participation from all of society. We need inventors, surely, and entrepreneurs. But we need critics, too, as well as politicians and regulators and activists. We need people to recognize the potential and the risks simultaneously, rather than focusing only on one or the other. What we need to make the most of technology is a well-functioning democracy.

Update: A literature review on technological innovation and economic growth.

Who uses social media well?

There is a sort of elitism that attaches itself to every kind of media. TV is an opiate of the masses — unless you watch the sort of prestige stuff that’s well-reviewed in The New Yorker. There’s a version of that for social media, too. Most people are wasting time, the thinking goes, but I’m using it for important stuff. Contrast that with this bit from Tyler Cowen, part of his predictions for the next 20 years:

Social media has become a kind of opiate of the intellectual class. So, grandparents use social media to track what their grandkids are doing — that’s nice and wonderful. But people who keep on refreshing Twitter for the latest developments in the Mueller investigation — frankly, I think it’s a big waste of time. I think there has been great wrongdoing. I fully support what Mueller is up to. But, at the end of the day, following it moment-to-moment is a kind of trap.

This is almost the opposite view. The everyday usage of social media might actually be good for you, whereas the intellectualized version of it may be terrible!

Now, I don’t think either half of that view is quite right. If most people were using social media well and its ill effects were just a problem for intellectuals, you wouldn’t see the broader evidence that it’s making people less happy.

Moreover, I suspect Cowen would agree that there is an even more intellectualized way of using social media that can be quite good for you: using it to become a wiser and more productive consumer of information, in the spirit of Cowen’s book Create Your Own Economy. For instance, try redoing your Twitter feed to follow mostly academics — not just the ones who double as public intellectuals — and watch how things change.

So, there are better and worse ways of using social media — that much is obvious. What I like about Cowen’s line is that it reminds us that intellectuals and journalists aren’t immune from tremendously unproductive social media habits. If they want to get more from social media, they ought to rethink what they’re using it for.

But when you step back and do that rethinking, I suspect it still leads you to less social media overall. Yes, redoing your Twitter feed might help. But at that point why not spend more time on Coursera or listening to podcasts? Why not go back to your RSS reader and follow dozens of great blogs? Why not invent some other kind of internet/media product entirely? The more you think about what you like best about social media, the more you remember the other great stuff the rest of the internet can do for you.

Taxes, innovation, and value

Everything that follows is even more provisional than usual…

Krugman has a good piece making the case for much higher income taxes for the very rich. John Cochrane has a rebuttal. He points to this paper by Charles Jones on innovation and top tax rates:

When the creation of ideas is the ultimate source of economic growth, this force sharply constrains both revenue-maximizing and welfare-maximizing top tax rates. For example, for extreme parameter values, maximizing the welfare of the middle class requires a negative top tax rate: the higher income that results from the subsidy to innovation more than makes up for the lost redistribution. More generally, the calibrated model suggests that incorporating ideas and economic growth cuts the optimal top marginal tax rate substantially relative to the basic Saez calculation.

I haven’t looked at the paper closely, hence the caveat at the top. I’ve posted about taxes and innovation here, and about taxes and growth here.
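For context, the “basic Saez calculation” the abstract refers to is, as I understand it, the standard static formula for the revenue-maximizing top marginal rate:

$$\tau^{*} = \frac{1}{1 + a \cdot e}$$

where $e$ is the elasticity of top taxable income with respect to the net-of-tax rate and $a$ is the Pareto parameter of the top of the income distribution. Jones’s point, as I read the abstract, is that once ideas and growth enter the model, the optimal top rate falls well below what this static formula implies.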

Anyway, my quick thought reading these posts and the abstract and summary of Jones’ paper is that we perhaps spend too little effort distinguishing innovative activities from everything else. This idea is discussed in Mariana Mazzucato’s The Value of Everything, which I just started. Here’s a bit from one review:

When value extraction is masquerading as value creation, we can end up praising and rewarding non-productive activities while ignoring productive sectors. As a result, GDP rises although an economy does not make anything new and people do not feel better off. Prosperity is thus concentrated in the hands of the rich few, and inequality tends to rise. If it is a value creator that deserves a higher proportion of national income, it is now time to reopen the debate about the ‘value of everything’.

In the same vein, here’s what I wrote in a post last year:

When we talk about “creating value” in this context, it’s not just about financial value; it’s really shorthand for organizing resources in a useful way to achieve some social goal. An organization’s mission is supposed to be the ambition; profits are supposed to be the incentive; and, at least in a competitive market, innovation is the way you get it done.

Meanwhile, keeping capital gains taxes low, for instance, is a pretty inefficient way to encourage innovation.

All of which is to say, we can argue about the optimal top tax rate. But what if we did a better job making it profitable to innovate and not profitable to do things that don’t create actual social value — or even destroy it? And yes, tax avoidance is a thing. So putting all of this on tax policy is probably a bad idea. But in our broader economic policy discussions, we could do a better job of differentiating what adds value and what doesn’t.

Changing your mind

Over the past year or so, I kept a list of articles where I noticed someone admitting they’d changed their mind about something, or discussing the idea of changing your mind in general. I wrote this week about how I was wrong about paywalls. So I figured I’d put the various links I found here on the blog, too. Here goes, in no particular order:

Tyler Cowen, on what it would take to convince him to support net neutrality again, a position he once held but no longer does:

Keep in mind, I’ve favored net neutrality for most of my history as a blogger.  You really could change my mind back to that stance.  Here is what you should do…

Akshat Rathi in Quartz, on coming to appreciate the merits of so-called “clean coal”:

As I began to report on the technology, it became clear I hadn’t looked beyond my own information bubble and may have been overtly suspicious of carbon-capture technology. By meeting dispassionate experts and visiting sites, for the first time, I began to grasp the enormity of the environmental challenge facing us and to look at the problem in a new light.

David Brooks on a new book by Alan Jacobs called How to Think:

Jacobs mentions that at the Yale Political Union members are admired if they can point to a time when a debate totally changed their mind on something. That means they take evidence seriously; that means they can enter into another’s mind-set. It means they treat debate as a learning exercise and not just as a means to victory.

How many public institutions celebrate these virtues? The U.S. Senate? Most TV talk shows? Even the universities?

Conor Friedersdorf on taking on the best form of an argument you disagree with, and quoting Chana Messinger:

And America would benefit if our culture of argument elevated the opposite approach, steel-manning, “the art of addressing the best form of the other person’s argument, even if it’s not the one they presented.” … In short, she says, “Think more deeply than you’re being asked to.”

Tyler Cowen, speaking on the Longform podcast:

“I think of my central contribution, or what I’m trying to have it be, is teaching people to think of counter arguments. I’m trying to teach a method: always push things one step further. What if, under what conditions, what would make this wrong?”

A study by Alison Gopnik and co-authors on becoming more closed-minded as we get older:

As they grow older learners are less flexible: they are less likely to adopt an initially unfamiliar hypothesis that is consistent with new evidence. Instead, learners prefer a familiar hypothesis that is less consistent with the evidence. In the social domain, both preschoolers and adolescents are actually the most flexible learners, adopting an unusual hypothesis more easily than either 6-y-olds or adults.

Chadwick Matlin: I was a meditation skeptic until I tried to make my case. At FiveThirtyEight:

Skepticism is a FiveThirtyEight staffer’s currency. The only mantras we chant around the office are: Wait for the evidence; wonder if the evidence has something wrong with it; trust the good evidence only until better evidence comes along. I was especially distrustful because mindfulness and meditation have been having a moment — meditation apps occupy some of the top spots on the App Store’s rankings of most popular health and fitness apps; Anderson Cooper has profiled the merits of mindfulness on “60 Minutes”; mindfulness is being used in schools as a way to help manage classrooms. Given the hype and this publication’s natural aversion to health trends, I figured I was safe disregarding my therapist’s big claims.

But as FiveThirtyEight’s science team assembled the junk science we wanted to shed in 2018, I started to wonder whether mindfulness really was bunk. So I dove into the scientific literature and discovered I was wrong: There is some limited evidence to suggest that meditation might help with some ailments and may produce measurable changes in the brain. It’s no miracle cure, and there’s still a lot of science left to do, especially about the kind of casual meditation people may fit into a busy day.

Julia Galef: Not all disagreements are opportunities to change your mind.

This is kind of a funny thing for me to be pushing back on, since I so often write and speak about the virtues of trying to change your own mind. But I want to push back on it anyway. I think that “trying to change your mind” is a great goal we should be striving for, but that most debates have a pretty low probability of succeeding at that, and we shouldn’t pretend otherwise. Here are some examples to illustrate the difference…

Bryan Cantrill, on Twitter:

How about a conference called “In Retrospect” in which presenters revisit talks they’ve given years prior — and describe how their thinking has evolved since?

Conor Friedersdorf on Eric Liu’s Better Arguments Project:

Be Open: “You cannot possibly change another person’s mind,” Liu said, “if you’re not willing to have your own mind changed. You may be able to rack up debater’s points. But you won’t change their mind if they sense you aren’t willing to have your mind changed. It’s a matter of mindset but also ‘heart-set.’”

Brian Resnick at Vox on intellectual humility:

Even when we overcome that immense challenge and figure out our errors, we need to remember we won’t necessarily be punished for saying, “I was wrong.” And we need to be braver about saying it. We need a culture that celebrates those words.

A challenge from David Leonhardt:

Pick an issue that you find complicated, and grapple with it.

Choose one on which you’re legitimately torn or harbor secret doubts. Read up on it. Don’t rush to explain away inconvenient evidence.

Then do something truly radical: Consider changing your mind, at least partially.

Agnes Callard on Pascal’s Wager and convincing yourself to believe something:

This argument has produced few converts, as Pascal would not have been surprised to learn. He knew that people cannot change their beliefs at will. We can’t muscle our mind into believing something we take to be false, not even when the upside is an eternity of happiness. Pascal’s solution is that you start by pretending to believe: attend church, speak the prayers, adopt religious habits. If you walk and talk like a believer, eventually you’ll come to think as one. He says, “This will naturally make you believe, and deaden your acuteness.”

Paul Krugman:

“what is remarkable is how small a role evidence has played in changing minds. This is clear with respect to fiscal policy, where the strong association between austerity and economic contraction has made little dent in anti-Keynesian views. It’s even clearer with respect to monetary policy, as illustrated by a clever 2014 article in Bloomberg. The reporters decided to follow up on a famous 2010 open letter to Ben Bernanke, in which a number of well-known conservative economists and other public figures warned that quantitative easing would risk a “debased dollar” and inflation. Bloomberg asked signatories about what they had learned from the failure of inflation to materialize; not one was willing to admit they were wrong.”

What have you been wrong about? And what are you doing to improve your thinking process to ensure you’re willing to change your mind?

Update: I’ll add more links as I can…

‘Change My View’ Reddit Community Launches Its Own Website

My writing process

Forgive the navel gazing, although that’s arguably the beauty of a blog no one really reads…

A colleague of mine, discussing which section of the magazine would be most appropriate for an article I’m writing, described my writing process succinctly and, I think, accurately:

You tend to arrive at judgments as you write, to check those judgments with experts, and to share them with readers.

I liked the description, but found myself wondering about the distinction I posted recently between the “textbook” voice and the “nuanced advocate”. Should I be more consciously pushing my writing into one or the other category? My colleague’s description seems to put my method somewhere in the middle.

Then I remembered a post I’d written in 2010, responding to writer Jim Henley’s description of the “blog-reporter” ethos. His aim was to distinguish bloggers who also do reporting from both straight news reporters and bloggers who simply opine. The blog-reporter ethos was:

* original reporting on first-hand sources
* a frankly stated point-of-view
* tempered by a scrupulous concern for fact
* an effort to include a fair account of differing perspectives
* ending in a willingness to plainly state conclusions about the subject

And, Henley continued:

I submit that this is just magazine-journalism ethos with the addition of cat pictures. If you think about what good long and short-form journalism looks like at a decent magazine, it looks like the bullet-points above.

In my post, I went on to quote Andy Revkin, then an environmental reporter at The New York Times, describing his blog Dot Earth:

I’ve spent a quarter century doing “conventional” journalism, and sought to create Dot Earth as an unconventional blog. It is not a spigot for my opinion. It is instead a journey that you’re invited to take with me… Lately, I’ve been describing the kind of inquiry I do on Dot Earth as providing a service akin to that of a mountain guide after an avalanche. Follow me and I can guarantee an honest search for a safe path.

My approach, as my colleague accurately describes it, isn’t exactly what either Revkin did or what Henley had in mind. But I see a lot of similarity. I see what I do as a very specific kind of “reporting”: reporting on ideas. And my intention is not only to relay facts from that reporting but to combine the methods of explanatory journalism with the expertise of researchers and academics in order to, I hope, answer important questions as best I can.

Why do paywalls work?

When The New York Times launched its paywall in 2011, I called it unsustainable. The paywall was porous; specifically, it didn’t meter visits coming from social media. That meant, I argued, that it would exempt more and more people over time and therefore that it wouldn’t work.

A lot has changed since then, and I think it’s only fair to admit that I was wrong. The Times has more than 3 million digital-only subscribers, for a print+digital total of 4 million. The Washington Post has 1.5 million digital subscribers. I’m lucky to work for a publication that is also thriving thanks to its focus on subscriptions. As Ken Doctor wrote in his 2018 year-end roundup at Nieman Lab, “the reader revenue revolution is real.”

So how did I miss it?

For one thing, my piece criticizing the Times paywall focused heavily on its planned social-media exemption. That exemption is gone. The Times paywall is less porous and much smarter today than it was in 2011, and that partly accounts for its success.

But that doesn’t really explain anything. I wrote that The Times’ assumption that “restricting access [from] blogs and social media would be counterproductive” was “quite true.”

The reason I thought that this assumption made sense helps to explain how my vision for the future of media diverged from the version we got, and offers a bit of a puzzle going forward.

The reason you had to give visitors coming from social media or blogs free access to your articles was simply that, in both cases, the biggest problem users faced was having too much to pay attention to. If you cut someone off, they’d happily resume browsing and quickly settle on something else that was free and nearly as good. If that sounds off, or like I’m devaluing professional journalism, recall that 2011 was only a few years after Clay Shirky published Here Comes Everybody. The idea that there was an endless supply of great stuff published by smart people for free was starting to catch on. Everyone was an expert on something (or enough were), and everyone had access to a printing press. All you had to do was filter for the good stuff, and that’s what social media was starting to do.

I summed up this attitude in one of my first posts for this blog:

There are many reasons why consumers resist paying for content.  One of those is the reality that you don’t need to do so in order to be well-informed.

The truth is that when the NYT starts charging for content I can go to the next best free paper.  If all the papers started charging, I could read only blogs.  These blogs can freely quote the lede and important ‘grafs from news stories on their way to offering commentary.  So I wouldn’t really lose much, if anything, by not reading the Times.

Why didn’t it turn out that way?

I’m honestly still not totally sure, but here are a few possibilities:

  1. The digital ad market collapsed. I’ll return to this in a second, but will note for now that one popular explanation for this is the Facebook/Google “duopoly.” Another is simply that an endless supply of content drove prices down.
  2. Filtering “the good stuff” turned out to be harder than I thought. Our filters have gotten really good in some ways, but remain very poor in others. Social media is still doing a poor job of filtering out harassment, hate speech, etc. And the sophistication of disinformation efforts proved greater than many people anticipated even a few years ago (certainly including me).
  3. Social media just didn’t turn out that well. My thoughts on this are here and here.
  4. Brands just matter a lot more than I thought. Probably true.
  5. People cared more about the kinds of content only journalists can produce than I anticipated — and would pay for it. From early on, I was clear about the fact that certain kinds of content are not easily replicable by free competitors. In particular, beat reporting and investigative reporting are expensive, and not many other organizations outside of journalism are set up to do them. But investigative reporting was relatively rare, and more to the point it seemed unclear whether people cared much about the important stuff. It seemed like the serious, expensive reporting was being subsidized by the other stuff. (I wrote about this here and here.) So, basically journalism had a competitive advantage in the stuff people mostly didn’t care about. And the stuff people did pay for — the features, the profiles, etc. — was suddenly forced to compete against a mountain of free content. But maybe I just underestimated people. Or, at least I underestimated a few million people? And perhaps the Trump era has simply caused more people to care about this stuff, and therefore to be more willing to pay.

Again, I was clearly wrong. But I do still think that many people in media underestimate the significance of the mountain of free content, and how it’s affecting their businesses. Clearly, the ad duopoly has affected publishers. But what if the digital ad market were more splintered? The fact remains that audiences are easier to find, and there’s more content to advertise against. There’s no universe I can think of where digital ads wouldn’t have gotten a whole lot cheaper, simply because of additional supply of content and superior targeting technology.

This has had a rather strange effect on the journalism market. Let’s drastically oversimplify and go hypothetical for a second, and bucket content into three groups. The first is the highest quality, the most expensive to produce, and can therefore sustain subscription businesses — at least potentially. (Think The Times, The New Yorker, etc.) The second is cheaper to produce, but still requires journalists and has a harder time getting anyone to pay. (I’m thinking of more aggregation-heavy operations, but insert your least-favorite ad-driven digital media company here.) The third bucket is essentially user-generated content, most of which now lives on a few big social-ish platforms, like Facebook, Instagram, Twitter, and YouTube. Forget the stuff that buckets 1 and 2 publish on these platforms. The content here requires no journalists to produce, and is entirely ad-driven, thanks to massive scale.

All of these content providers are competing for a supply of attention that is in some sense limited. I figured that Bucket 2 — the digital, ad-driven, lower-cost outlets — would be in a better position in the market than Bucket 1 — higher-cost, premium outlets — because of both a better cost structure and a commitment to free content. The endless supply of content from Bucket 3 would make paying unattractive, and in that world Bucket 2 would usurp Bucket 1.

So here’s where we get to the strange effect of the tanking digital ad market. Instead of playing out that way, Bucket 1’s worse cost structure had a counterintuitive, positive effect. These publishers realized earlier on how hard it would be to become sustainable on ads alone, and so pivoted to reader revenue sooner. When the ad market tanked, Bucket 2 was suddenly in trouble. And Bucket 2 failing is basically the best thing that can happen to Bucket 1. I said before that readers wouldn’t pay because of all that content, most of it from Bucket 3. But that’s not quite right; the free competitors in Bucket 2 were a big part of the story. The idea was that if you don’t want to read Bloomberg, there’s always Business Insider. If BI needs to charge — or if it were to disappear — Bloomberg’s position is strengthened.

OK, I’ve spent 1,000 words going in a big circle. I missed that people would pay for news because… I missed that people were willing to pay?

My point, though, is that at least part of why I was wrong was that I didn’t predict that the content supply shock would hit the ad side of the market earlier and more dramatically than the user side. The endless supply of ad spots lowered digital ad rates, which is consistent with my whole point about endless amounts of free content. In the short-term, this strengthened subscription publishers by hurting their free competitors.

I realize that I’m at risk at this point of responding to a failed prediction by amending my failed model rather than changing my belief. That’s a typical sign of confirmation bias and bad forecasting. So it could well be that my whole theory of the content supply shock is just wrong all the way through.

Nonetheless, I would not be surprised if the content supply shock eventually catches up to the user side of the market. By that I mean that people’s willingness to pay will still be affected by how many alternatives there are, and though the alternatives to paying may not look great now, that could easily change.

Imagine, for instance, that digital media startups had known the ad market would tank. Many of them might have combined their lower-cost approach not with a socially optimized strategy but with a focus on subscriptions. What might Vox.com look like if it had started from Day 1 with reader revenue in mind? In fact, platforms like Substack and Patreon are making it easier for digital-first publications to get reader revenue. So, while The Times and Bloomberg are getting a brief respite as their free digital competitors pivot to reader revenue or fail outright, they’ll soon face a new wave of competition from digital outlets set up from the start to compete without so much focus on advertising.

Moreover, I expect we’ll see new and clever ways to aggregate amateur content and free, subsidized professional content (from think tanks, universities, etc.), creating new Bucket 3 competitors that attract many readers as a cheaper alternative to subscriptions to premium journalism.

In other words, the reader revenue revolution is real. But subscription businesses shouldn’t get complacent, because it may not be indefinite.


Lessons from the age of electricity

I’ve done a bunch of posts lately based on my reading about the invention of the electric light and the spread of electricity. (I’ve linked to all of those posts at the bottom of this one.) I thought it’d be worthwhile to list a few of the lessons I’ve drawn from that reading. Here goes:

  1. Culture matters. Culture affects innovation, on both the supply and the demand side. On the supply side, it seems likely that Americans’ self-conception as an inventive nation became a sort of self-fulfilling prophecy. On the demand side, Americans’ eagerness to adopt the electric light quite clearly contributed to the U.S. adopting it more rapidly than Europe did.
  2. Institutions matter and regulation is essential. The electric light was a profoundly positive invention, on net. But it also contributed to the exploitation of workers and caused frequent accidents. Interestingly, private insurers helped create industry standards to limit accidents. But regulation was essential to limiting the downsides of the new technology.
  3. Technology can exacerbate inequality. Cities got the electric light before rural areas, and that contributed to a growing divide between urban and rural America.
  4. It takes time to realize a technology’s full potential. A fun example of this is the battle to adopt lampshades. Who’d pay extra for light and then put it under a shade? (See pages 267-268 of Age of Edison.) But a more serious example concerns the decades-long lag between the introduction of electric motors to manufacturing and the productivity boom that they enabled.

Here are the posts I put up on this subject over the past several weeks:

America’s adoption of the electric light.

America the inventive.

Who gets credit for America’s adoption of electricity?

Electricity, the New Deal, and America’s urban-rural divide.

The electric lag.

Labor and the electric light.