How will humans add value to algorithms?

Last month, I wrote a piece at HBR about how humans and algorithms will collaborate, based on the writings of Tyler Cowen and Andrew McAfee. The central tension was whether that collaboration would be set up such that algorithms were providing an input for humans to make decisions, or whether human reasoning would be the input for algorithms to make them.

One thing I thought about in the process of writing that piece but didn’t include was the question of whether one of these two models offered humans a higher value role. In other words, are you more likely to be highly compensated when you’re the decider adding value on top of an algorithm, or when you’re providing input to one?

I was initially leaning toward the former, but I wasn’t sure and so didn’t raise the question in the post. But the more I think about it, the more it seems to me that there will be opportunities for highly paid and poorly paid (or even unpaid) contributions in both cases.

Here’s a quick outline of what I’m thinking:

It seems totally possible for the post-algorithm “decider” to be an extremely low level, poorly paid contribution. I’m imagining someone whose job is basically just to review algorithmic decisions and make sure nothing is totally out of whack. Think of someone on an assembly line responsible for quality control who pulls the cord if something looks amiss. Just because this position in the algorithmic example is closer to the final decision point doesn’t mean it will be high value or well paid.

Likewise, it’s totally possible to imagine pre-algorithm positions that are high value. Given that the aggregation of expert opinion can often produce a better prediction than any expert on his or her own, you can easily imagine these expert-as-algorithmic-input positions as being relatively high value.
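The claim that aggregated opinions beat any single expert has a simple statistical core: if individual estimates are unbiased but noisy, averaging them cancels much of the noise. Here's a minimal sketch of that idea. The experts, the true value, and the noise level are all made up for illustration, and real expert errors are rarely this independent.

```python
import random

random.seed(42)

true_value = 100.0
num_experts = 50

# Each hypothetical "expert" gives an unbiased but noisy estimate of the
# true value. The noise level (std dev of 15) is an arbitrary assumption.
estimates = [true_value + random.gauss(0, 15) for _ in range(num_experts)]

# The aggregate prediction is just the simple average of all estimates.
aggregate = sum(estimates) / len(estimates)

# Compare the aggregate's error to the average individual expert's error.
aggregate_error = abs(aggregate - true_value)
mean_individual_error = sum(abs(e - true_value) for e in estimates) / len(estimates)

print(f"aggregate error:       {aggregate_error:.2f}")
print(f"mean individual error: {mean_individual_error:.2f}")
```

Under these assumptions the aggregate's error shrinks roughly with the square root of the number of experts, which is why each individual expert's marginal contribution can be hard to discern even when the pool as a whole is valuable.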

Still, the onus is on the experts to truly prove useful in this scenario. If they’re not adding discernible value, the algorithm could instead aggregate an even greater range of human judgment (say, via social media) cheaply or even for free.

I’m not sure where this leaves us, except to say that I don’t see much reason to be “rooting” for algorithms to be inputs to humans or vice versa. In all likelihood that’s not the right question. The relevant question, and a harder one, is how we apply human judgment in a way that enhances our increasingly impressive computational decision-making powers.
