Randomization is good

Researchers used LinkedIn to study the economic power of “weak ties.” It’s a fascinating topic and you can read about the study here. The New York Times reported on the study with the headline “LinkedIn Ran Social Experiments on 20 Million Users Over Five Years.” To put it charitably, that’s a very weird reaction.

Stepping back for a second… People are rightly worried about the power large online platforms have to influence their behavior. That topic can head down a philosophical rabbit hole, as it invites questions about agency and free will. But it’s a perfectly reasonable thing to worry about: Code is a form of governance, and we ought to question the ways companies govern our online spaces.

But that’s happening all the time! Once you become concerned about companies influencing behavior you’ll see it all around you. There’s nothing special about an A/B test in this regard. Randomized experiences are just one way that influence might happen.

However, there’s something nefarious in the colloquial sense of someone “experimenting on you.” That sounds especially bad. But the colloquial meaning doesn’t require randomization: if Facebook simply tweaks how the news feed works to get you to click more ads, that can be a scary form of influence, and meet the colloquial definition of “experimenting” on you, even if it never runs a randomized controlled trial.

And then there’s the fact that in this case, the randomization is part of the sort of thing you might hope platforms would do: Run experiments in collaboration with researchers, who no doubt gave consideration to the ethical considerations involved, in order to create new knowledge.

Of all the ways platforms might influence us, this seems like one of the least nefarious. But the word “experiment” carries a weight that then can make this seem like a uniquely worrisome thing. Worry about the influence, not the academic experiments.

Side doors

“Well shit, we just need to blog more.”

The Verge has been redesigned. That’s from the editor-in-chief, Nilay Patel, who goes on to say:

When you embark on a project to totally reboot a giant site that makes a bunch of money, you inevitably get asked questions about conversion metrics and KPIs and other extremely boring vocabulary words. People will pop out of dark corners trying to start interminable conversations about “side doors,” and you will have to run away from them, screaming.

But there’s only one real goal here: The Verge should be fun to read, every time you open it. If we get that right, everything else will fall into place.

That stood out to me because I used to say my whole job was walking into rooms and explaining to the people I worked with that the audience came in almost entirely through side doors. By that I meant mostly social media, but also search engines, messaging apps, and emails you didn’t send. Almost no one came to your homepage, so if you wanted to attract readers you needed to understand side doors. This wasn’t novel on my part at all; it was conventional wisdom. But as a “digital” editor focused on social media and audience development, I gave this pitch a lot.

And I ended by saying: You can’t change this. You can build the nicest homepage in the world but people still will mostly come through side doors.

But a lot of those side doors just turned out to be bad, not only for publishers but for readers and for public discourse, too. Algorithmic social media increasingly feels like a wrong turn. It’s not good for us.

So I’m glad to see The Verge trying to build its own feed and attract more readers directly. It probably was the case that, when I was talking about side doors in the mid-2010s, most small- and medium-sized publishers couldn’t do much on their own to change those dynamics. It’s not usually a good idea to stand athwart audience habits yelling “Stop!”

But not every audience habit is a step in the right direction either, and even trends that seem inevitable can shift over a period of years.

Side doors are part of what makes the internet so great: You jump from one thing to the next, following a chain of links to places you might not have expected. But many of the side doors we’ve actually built are bad. I hope publishers like The Verge can convince more readers to try the front door for a change.

Some intellectual influences

I’ve been thinking about the perspectives and schools of thought that I came to in my formative years that, for better or worse, have shaped how I think about a wide range of things. I thought it’d be useful to sketch those out, if only for myself. They’re quite different from each other — some are schools of thought or intellectual subfields, some are hazy ideas that resist easy definition.

Two are subfields of economics: behavioral economics and the economics of ideas. One is a philosophical tradition: American pragmatism. And two are related to the internet: what I’m calling the “Berkman perspective,” a set of ideas from internet scholars in and around the Berkman Klein Center; and the “wonkosphere,” the policy blogging world that thrived during the Obama years, of which I was a voracious reader.

Behavioral economics

I’ve been deeply influenced by both the popular writing and research on cognitive bias, decision making, and forecasting. In a nutshell, I take this work to suggest:

The economics of ideas

I’ve always been drawn to the topic of innovation and how it happens and have enjoyed a number of great books on this topic. In 2015 I had the privilege to audit a PhD course at MIT Sloan on the economics of ideas which helped me go deeper on this subject. In terms of what I’ve come to take away from this field:

  • New ideas are the driver of long-run economic growth, so few questions are as important as the question of how ideas are produced, aka how innovation happens
  • Ideas are an unusual kind of economic good because they’re nonrival. Some of the big assumptions that drive how we’d normally think about markets don’t apply
  • Macro-theoretical models of ideas-driven growth are useful but the economics of ideas is and should be a deeply empirical field: We should look at incentives and markets and property rights but also culture and institutions to understand how innovation happens

The Berkman view

This one was the hardest to name, but I remain deeply influenced by a set of scholars and writers who studied the internet in the early 2000s and 2010s. Many, though certainly not all, were associated at some point with the Berkman Klein Center for Internet & Society at Harvard. And I’m proud to have spent part of a semester studying AI ethics and governance there as an Assembly Fellow in 2019. If I were to sum up what I took from a range of thinkers, it’s this:

  • The internet was a big deal and worth studying. This seems obvious now, but it wasn’t obvious to many people even 15 years ago.
  • “Code is law,” meaning software can shape behavior. It can be a form of governance, limiting what we can do or pushing us toward a particular decision.
  • The internet could enable new and potentially better forms of cooperation and communication.

The “wonkosphere”

It’s hard to overstate how much I was influenced by the policy and economics blogging world of the late 2000s and early 2010s. In terms of what I took from it:

  • Argument produces knowledge. The back-and-forth within this world wasn’t just fun to read, it showed a way to think and debate and learn (not always good of course!)
  • The internet is an amazing tool for research. Reading the wonkosphere convinced me that a good faith reading of the massive quantity of high-quality information available online often produces better journalism than so-called “neutral” reporting
  • There is a technocratic approach to politics that, for better or worse, musters evidence and arguments about policy questions and is skeptical of overarching ideology as a way of choosing between policies

Pragmatism

This last one I’ll just turn over to the intro from the Stanford Encyclopedia of Philosophy:

Pragmatism is a philosophical tradition that – very broadly – understands knowing the world as inseparable from agency within it. This general idea has attracted a remarkably rich and at times contrary range of interpretations, including: that all philosophical concepts should be tested via scientific experimentation, that a claim is true if and only if it is useful (relatedly: if a philosophical theory does not contribute directly to social progress then it is not worth much), that experience consists in transacting with rather than representing nature, that articulate language rests on a deep bed of shared human practices that can never be fully ‘made explicit’.

Behavioral economics in one chart

It’s sometimes claimed, not entirely unreasonably, that the research on cognitive biases amounts to an unwieldy laundry list. Just look at how long the list of cognitive biases is on Wikipedia. This frustration is usually paired with some other criticism of the field: that results don’t replicate, or that there’s no underlying theory, or that it’s a mistake to benchmark the human brain against the so-called “rational ideal.”

I’m not very moved by these critiques; if the list of biases is long, it’s partly because the psychology of heuristics and biases is appropriately a very empirical field. An overarching theory would be nice, but only if it can explain the facts. The whole point of behavioral economics was to correct the fact that economists had let theory wander away from reality.

Still, I’ve been thinking recently about how to sum up the key findings of behavioral science. What’s the shortest possible summary of what we know about bias and decision making?

Enter Decision Leadership by Don Moore and Max Bazerman. They wrote the textbook on decision making, and in this book they offer advice on how to make good decisions across an organization. I’ve had the pleasure to work with them and I recommend the book.

What I want to share here isn’t their advice, but their succinct summary of decision biases, from the book’s appendix. It’s the best synthesis of the field I know of:

p. 196

Here’s a bit more.

The human mind, for all its miraculous powers, is not perfect. We do impressively well navigating a complex world but nevertheless fall short of the rational ideal. This is no surprise–perfect rationality assumes we have infinite cognitive processing capacity and complete preferences. Lacking these, we adapt by using shortcuts or simplifying heuristics to manage challenges that threaten to exceed our cognitive limitations. These heuristics serve us well much of the time, but they can lead to predictable errors. We group these errors into four types based on the heuristics that give rise to them.

p. 195

The availability heuristic

The first is the availability heuristic, which serves as an efficient way of dealing with our lack of omniscience. Since no one knows everything, we rely instead on the information that is available to us… Reliance on the availability heuristic gives recent and memorable events outsize influence in your likelihood judgments. After experiencing a dramatic event such as a burglary, a wildfire, a hurricane, or an earthquake, your interest in purchasing insurance to protect yourself is likely to go up… The availability heuristic leads us to exaggerate vivid, dramatic, or memorable risks–those we can easily retrieve from memory… The availability heuristic biases all of us toward overreliance on what we know or the data we have on hand. Sometimes information is easier to recall because it is emotionally vivid, but other information is privileged simply due to the quirks of human memory… You can reduce your vulnerability to the availability bias by asking yourself what information you would like to have in order to make a fully informed decision–and then go seek it out.

p. 195-198

The confirmation heuristic

The second is the confirmation heuristic, which simplifies the process by which we gather new information… One of the challenges that impedes us in seeking out the most useful evidence to inform our decisions is that confirmation is so much more natural than disconfirmation… Even our own brains are better at confirmation than disconfirmation. We are, as a rule, better at identifying the presence than the absence of something. Identifying who is missing from a group is more difficult than determining who is present. That has the dangerous consequence of making it easier to find evidence for whatever we’re looking for… Confirmation can bias our thought processes even when we are motivated to be accurate. It is even more powerful when it serves a motivation to believe that we are good or virtuous or right. Virtue and sanctimony can align when we defend our group and its belief systems. The motivation to believe that our friends, leaders, and teachers are right can make it difficult to hear evidence that questions them… The automatic tendency to think first of information that confirms your expectations will make it easy for you to jump to conclusions. It will make it easy for you to become overconfident, too sure that the evidence supports your beliefs… If you want to make better decisions, remind yourself to ask what your critics or opponents would say about the same issue.

p. 195, 199-202

The representativeness heuristic

The third is the representativeness heuristic, which stands in for full understanding of cause and effect relationships. We make assumptions about what causes what, relying on the similarity between effects and their putative causes…

p. 195

This is a tricky one so I want to step outside the book for a second and supplement the definition above with the definition from the American Psychological Association:

representativeness heuristic: a strategy for making categorical judgments about a given person or target based on how closely the exemplar matches the typical or average member of the category. For example, given a choice of the two categories poet and accountant, people are likely to assign a person in unconventional clothes reading a poetry book to the former category; however, the much greater frequency of accountants in the population means that such a person is more likely to be an accountant.
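
To make the base-rate point in that definition concrete, here’s a toy Bayes calculation. Every number below is a made-up illustration (the base rates and likelihoods are assumptions, not estimates):

```python
# Toy Bayes calculation for the poet-vs-accountant example.
# All numbers are invented for illustration.
base_rate = {"poet": 0.0001, "accountant": 0.01}       # assumed share of population
p_unconventional = {"poet": 0.5, "accountant": 0.05}   # assumed P(looks the part | job)

# Unnormalized posterior: prior * likelihood, then normalize.
posterior = {job: base_rate[job] * p_unconventional[job] for job in base_rate}
total = sum(posterior.values())
posterior = {job: p / total for job, p in posterior.items()}

# Even though a poet is 10x more likely to dress unconventionally,
# accountants are 100x more common, so the accountant still wins.
print(posterior)  # accountant ≈ 0.91, poet ≈ 0.09
```

The representativeness heuristic amounts to judging from the likelihood term alone and ignoring the prior.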

The framing heuristic

Finally, framing helps us establish preferences, deciding how good or bad something is by comparing it to alternatives… Context frames our preferences in important ways. Frames drive our choices in ways that rational theory would not predict. We routinely behave as if we are risk averse when we consider choices about gains but flip to being risk seeking when we think about the same prospect as a loss. This reversal of risk preferences owes itself to the fact that we think about gains and losses relative to a reference point–usually the status quo or, in the case of investments, the purchase price… One [consequence] is the so-called endowment effect, our attachment to the stuff we happen to have… The endowment effect can contribute to the status quo bias, which leads us to be irrationally attached to the existing endowment of possessions, privileges, and practices.

p. 195, 205-209
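
The gain/loss reversal described above can be sketched with a prospect-theory-style value function. The curvature and loss-aversion parameters below are the commonly cited Tversky and Kahneman estimates; this is an illustration of the shape of the idea, not the book’s own model:

```python
# A sketch of a prospect-theory-style value function. Outcomes are
# gains/losses relative to a reference point, not absolute wealth.
# Parameters: curvature alpha = 0.88, loss aversion lambda = 2.25.

def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha              # concave for gains -> risk averse
    return -lam * ((-x) ** alpha)      # convex and steeper for losses -> risk seeking

# A sure $50 vs. a 50/50 shot at $100: the sure thing feels better...
assert value(50) > 0.5 * value(100)

# ...but a sure -$50 vs. a 50/50 shot at -$100: now the gamble feels better.
assert 0.5 * value(-100) > value(-50)
```

Both gambles have the same expected value as their sure-thing counterparts; only the frame (gain vs. loss) flips the preference.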

Summing it up

So, what’s the shortest version of behavioral economics and decision making? We are not perfectly rational, and in particular we’re intuitively quite bad at statistical and probabilistic thinking. We rely heavily on the information we can easily recall, and we look for reasons to confirm what we already think, especially when doing so protects our self-conception or our group’s status. When we think about cause and effect, we don’t think carefully about probability and counterfactuals; instead, we think in terms of stories and archetypes, putting things into categories and constructing causal narratives based on the category. And we are creatures of context: our frame of reference can shift based on what is made salient to us, and we are often especially attached to the status quo.

There are a lot of narrower biases within these four heuristics, and no doubt there’s plenty to quibble with in any specific taxonomy like this one. But in my book that’s a pretty decent starting point for summing up a wide set of empirical work, and it clearly helps explain a lot of how we think and how we decide.

A causal question

A good tweet:

Of course, putting this question to good use requires judgment. There are no iron rules for mapping a causal claim to a prediction about large-scale data in the messy real world. And there are always a million ways you can explain why the causal claim is still true even if the predicted real-world effect doesn’t turn up. But it’s still a great question. The point of causal analysis is, ultimately, to make claims about the wider world and how it behaves, not just to predict the outcomes of RCTs.

Forecasting update

In February, I recapped my track record as a forecaster, going back to 2015. I’m a bit more than halfway through my first season as a “Pro” on the INFER forecasting platform so I thought I’d post an update.

  • 64 questions have resolved. I’ve forecast on 7 of them.
  • Of users who’ve forecast on at least five resolved questions this season (the platform’s default leaderboard cutoff) I’m ranked 66 out of 280 (76th percentile). My percentile is basically the same if you include anyone who’s forecast on at least one resolved question.
  • For the all-time leaderboard, among anyone with five resolved questions since 2020, I’m currently 73 out of 558 (87th percentile).
  • My best performance to date was on a forecast of venture capital; my worst was on Intel’s Q2 revenue.
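
Those percentiles follow directly from rank and field size. As a quick sanity check (assuming percentile here means the share of the field ranked below me):

```python
# Percentile from a leaderboard rank: the share of forecasters you outrank.
def percentile(rank, field):
    return 100 * (field - rank) / field

print(round(percentile(66, 280)))   # 76 (season leaderboard)
print(round(percentile(73, 558)))   # 87 (all-time leaderboard)
```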

Prediction and resilience

In Radical Uncertainty, the authors make a common argument: Instead of trying to predict the future, you should prepare for a wide range of scenarios, including ones you can’t even fully articulate. You should become more robust or resilient:

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

This might make sense as an argument against certain types of forecasting exercises that organizations undertake. But it seems to me to still clearly require accurate prediction.

They’re essentially shifting the necessary prediction from the likelihood of a specific future outcome (will there be a recession, for example) to a prediction of the causal effect of a certain action across a range of unspecified outcomes.

For example, one way to be more resilient as a company is to hold more cash. Cash keeps your options open, so instead of trying to predict exactly where the economy is going, hold more cash.

But does that get you away from predictions and forecasting?

Not as far as I can tell. You’re basically predicting that, all else equal, your firm will do better (by whatever metrics) if you increase your cash holdings than if you don’t. You’re making a prediction about a causal effect, rather than about a specific external scenario. Prediction, though, is still a key part of the enterprise.

In their book Prediction Machines, Agrawal, Gans, and Goldfarb define prediction as “using information you have to generate information you don’t have.” They’re using the word more broadly than just predicting the future, but that definition has value even if we add “about the future” to the end of it.

The art of decision making isn’t about trying to predict everything that might have an influence on you. It’s that you use the information you have to generate information you don’t have; you make the predictions that seem most useful, that have the highest payoff.

Sometimes that is straightforward forecasting of a scenario: Will there be a recession next year? Sometimes it’s predicting the effect of a choice you’re considering, ahead of time: Will we be more likely to succeed if we do X to improve our cash holdings?

The difference isn’t that one of these involves prediction and the other says prediction is impossible. Both are bets about the future. The difference is in their potential payoff. Does the information you have give you any purchase on the question? Maybe your information suggests that recession forecasting is borderline impossible but there’s good evidence on the causal effects of increasing cash holdings. You’re likely to make a better prediction in the latter case.

Use the information you have to generate information you don’t have. That takes time and effort — to collect information, to analyze it, to apply it to the decision at hand. The question when you’re considering the value of a forecast isn’t Can I avoid making predictions here? It’s Is the effort I’ll put into this prediction the best use of my time given what I’m trying to achieve?
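
One way to make that last question concrete is a back-of-the-envelope calculation. Every number below is hypothetical, chosen only to show the shape of the tradeoff:

```python
# Back-of-the-envelope value of investing effort in a prediction.
# All inputs are hypothetical.

def net_value_of_forecast(decision_stakes, p_right_baseline, p_right_with_effort, cost_of_effort):
    """Expected gain from improved accuracy, minus the effort it takes."""
    accuracy_gain = p_right_with_effort - p_right_baseline
    return decision_stakes * accuracy_gain - cost_of_effort

# A $1M decision where research moves you from 60% to 70% right,
# at a cost of $20k of analyst time:
print(round(net_value_of_forecast(1_000_000, 0.60, 0.70, 20_000)))  # 80000
```

If the stakes are small or the research can’t move your accuracy much, the same formula goes negative, and the forecast isn’t worth the effort.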

The German labor market

A paper by an MIT economist explains its unique features…

Germany has less low-wage work and less inequality than the US, but more flexibility and lower unemployment than France.

Germany—the world’s fourth largest economy—has remained partially insulated from the growing labor market challenges faced by the United States and other high-income countries. In many advanced economies, the past few decades have seen sustained increases in earnings inequality, a fall in the labor share, the disappearance of “good jobs” in manufacturing, the rise of precarious work, and a deterioration in the power of organized labor and individual workers. These developments threaten to prevent economic growth from translating into shared prosperity. Figure 1 shows that compared to the United States, German organized labor has remained strong. Half of German workers are covered by a collective bargaining agreement, compared to 6.1 percent of private-sector Americans (BLS, 2022). Trust in unions is almost twice as high in Germany compared to the US. Employees in Germany work fewer hours, the country’s low-wage sector is 25 percent smaller, and labor’s share of national income is higher. The German manufacturing sector still makes up almost a quarter of GDP (compared to 12 percent in the US). Germany has one of the highest robot penetration rates in the world (IFR, 2017)—yet in contrast to the US (Acemoglu and Restrepo, 2020), robotization has not led to net employment declines in Germany, especially in areas with high union strength (Dauth, Findeisen, Suedekum, and Woessner, 2021). At the same time, relative to other OECD countries—many of which, like France or Italy, have maintained even higher collective bargaining coverage through more rigid bargaining systems—the German labor market features low unemployment and high labor force participation (though also a larger low-wage sector).

Bargaining is mostly (but not entirely) at the sectoral level rather than the firm level.

The first pillar is the sectoral bargaining system. In Germany, unions and employer associations engage in bargaining at the industry-region level, leading to broader coverage than in the US. Meanwhile, partial decentralization of bargaining to the firm level—through flexibility provisions in sectoral agreements, or direct negotiations between individual firms and sectoral unions—gives firms space to adapt to changing circumstances. However, this flexibility has also resulted in a gradual erosion of bargaining coverage.

Workers have multiple channels to share their perspectives in firms’ decision making.

The second pillar of the German model is firm-level codetermination. Workers are integrated into corporate decision-making through membership on company boards and the formation of “works councils,” leading to ongoing cooperative dialogue between shareholders, managers, and workers. Overall, the German model combines centralized “social partnership” between unions and employer associations at the industry-region level with decentralized mechanisms for local wage-setting, dialogue, and customization of employment conditions.

There’s a tradeoff between flexibility and collective bargaining. Germany’s balance between them has evolved over time.

A recurrent theme in our discussion of the German model will be a tension at the heart of the model: between firms’ flexibility and workers’ collective bargaining strength. Since the 1990s, the model has become more decentralized and flexible. This evolution has arguably contributed to reductions in unemployment and increases in economic growth, but has entailed a substantial erosion of collective bargaining and works council coverage (as Figure 2 illustrates) and a weakening of bargaining agreements. This erosion may explain Germany’s slowly increasing—and perhaps underappreciated—exposure to the afflictions suffered by other developed-world labor markets: rising wage inequality and the spread of low-wage, precarious jobs.

Notes & quotes: ‘Radical Uncertainty’

I recently read Radical Uncertainty by the economists John Kay and Mervyn King. A few notes, then a bunch of block quotes that stood out to me…

Notes

  • I strongly disagree in practice with their argument against probabilistic reasoning. Only economists who’ve spent time in finance and business schools could possibly think that probability and expected value-based thinking were overvalued; in practice they seem far undervalued. Kay and King tell the story of Obama’s advisors telling him numerically what they think the chances are that Osama bin Laden is in the house — a scene Phil Tetlock describes in his book as a model case of probabilistic reasoning. Kay and King think this is useless and actively damaging: The analysts are using numbers to hide that they just don’t know. I think Tetlock has it right here, and that summarizes how I felt about most of the book.
  • That said, Kay and King’s basic point that sometimes it’s pointless to put a probability on something and we should just admit “I have no idea” — that seems right. What will US GDP be in the year 5,000? I’m not sure it’s helpful to put numbers and confidence intervals on that sort of question.
  • They also stick up for the art of reasoning to the best explanation (abductive reasoning), and they frequently come back to the question, borrowed from a business professor: “What is going on here?” Again, overall I’m mostly skeptical. The evidence seems to suggest this is an overvalued starting point — we’re more likely to zoom too far in than too far out, which is why it’s often wise to step back, look for data and take the “outside view.” But it’s also possible to go too far in that direction and pay too little attention to what’s unique about a single case (I’ve done it plenty). And they’re right that explaining individual cases requires judgment. Sometimes broader data is nonexistent; sometimes conditions are such that broader comparison sets aren’t useful; sometimes diving into the details is what’s required to truly understand a topic. “What is going on here?” is a good animating question.
  • Their dismissal of behavioral economics was unpersuasive to me, but the discussion of narratives in decision making was intriguing. They argue that people craft “reference narratives” about how they hope or expect their lives to go, and then they make decisions so as to bring reality as closely into line with the narrative as possible. I was left wanting more on this subject.

Quotes

Plato sought and found truth in logic; for him there was a sharp distinction between truth, which was axiomatic, and probability, which was merely the opinion of man. In premodern thought there was no such thing as randomness, since the course of events reflected the will of the gods, which was determinate if not fully known. The means of resolving uncertainty was not to be found in mathematics, but in a better appreciation of the will of the gods.

p. 54

At the end of the nineteenth century, Charles Sanders Peirce, a founder of the American school of pragmatist philosophy, distinguished three broad styles of reasoning. Deductive reasoning reaches logical conclusions from stated premises… Inductive reasoning … seeks to generalise from observations, and may be supported or refuted by subsequent experience… Abductive reasoning seeks to provide the best explanation of a unique event… Deductive, inductive, and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.

p. 137-138

Kahneman offers an explanation of why earlier and inadequate theories of choice persisted for so long — a ‘theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws’. We might say the same about behavioural economics. We believe that it is time to move beyond judgmental taxonomies of ‘biases’ derived from a benchmark which is a normative model of human behaviour deduced from implausible a priori principles. And ask instead how humans do behave in large worlds of which they can only ever have imperfect knowledge.

p. 147-148

In many colleges, students of law are taught to follow a structure described as IRAC: issue, rule, analysis, conclusion. The impressive skill of a top lawyer is to identify the issue; to give structure to an array of amorphous facts, frequently presented in a tendentious manner — that is to establish ‘what is going on here.’ … IRAC is a useful acronym for anyone engaged in the search for practical knowledge. In the legal context it leads naturally to the two next stages of effective practical reasoning — communication of narrative and challenge to the prevailing narrative.

p. 194-195

The legal style of reasoning, essentially abductive, involves a search for the ‘best explanation’ — a persuasive narrative account of events relevant to the case. The great jurist and US Supreme Court Justice Oliver Wendell Holmes Jr. began his exposition of legal philosophy with the observation that ‘The life of the law has not been logic; it has been experience… The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.’

p. 211

A ‘good’ explanation meets the twin criteria of credibility and coherence. It is consistent with (most of) the available evidence and the general knowledge available to judges and jurors… A good explanation demonstrates internal coherence such that, taken as a whole, the account of events makes sense. The best explanation can be distinguished from other explanations and is not compatible with these other explanations. Statistical reasoning has its place but only when integrated into an overall narrative or best explanation.

p. 212

In pressing the case for probabilistic reasoning, Philip Tetlock and Daniel Gardner, the appraisers of forecasting and architects of the ‘good judgment project’, argue that ‘For decades, the United States had a policy of maintaining the capacity to fight two wars simultaneously. But why not three? Or four? Why not prepare for an alien invasion while we are at it? The answer hinges on probabilities.’ No it doesn’t. There is no basis on which one can form probabilities of an invasion by aliens… The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

p. 294-295

The mark of science is not insistence on deductive reasoning but insistence that observation trumps theory, whatever the purported authority supporting the theory.

p. 389

Acknowledging radical uncertainty does not mean that anything goes. Look to the future and contemplate the ways in which information technology will be deployed in the coming decades, or consider the ways in which the growth of prosperity and political influence in Asia will affect the geopolitical balance. They are all things about which we can know something, but not enough; we see through a glass, darkly. We can construct narratives and scenarios to describe the ways in which technology and global politics might develop in the next twenty years; but there is no sensible way in which we can refine such dialogue by attaching probabilities to a comprehensive list of contingencies. We might, however, talk coherently about the confidence we place in scenarios and the likelihood that they will arise. As we have emphasised, the words ‘confidence’, ‘likelihood’ and ‘probability’ are often used interchangeably but they have different meanings. We do not enhance our understanding of the future by inventing facts and figures to fill in the inescapable gaps in our knowledge. We cannot rely on forecasts in planning for the future…. We are not afraid to answer these questions with ‘we do not know.’

p. 403

My writing for Quartz

I recently left Quartz after ~2.5 rewarding years as an editor there. The most gratifying editorial aspect of that work was editing hundreds of interesting features from nearly every reporter on staff. There are too many of those pieces to single out favorites. But I wrote a bit, too, and I wanted to link to a few of my favorite pieces from my time there:

I’ll always be grateful for my tenure there, working with some truly excellent people.