I’ve written about the potential dangers of Google and Facebook using algorithms to recommend news, with the basic fear being that they’ll recommend stories that confirm my biases rather than “feed me my vegetables.” But Nieman Lab has an interview with the founder of Google News who has quite a different take on what he’s doing:
“Over time,” he replied, “I realized that there is value in the fact that individual editors have a point of view. If everybody tried to be super-objective, then you’d get a watered-down, bland discussion,” he notes. But “you actually benefit from the fact that each publication has its own style, it has its own point of view, and it can articulate a point of view very strongly.” Provided that perspective can be balanced with another — one that, basically, speaks for another audience — that kind of algorithmic objectivity allows for a more nuanced take on news stories than you’d get from individual editors trying, individually, to strike a balance. “You really want the most articulate and passionate people arguing both sides of the equation,” Bharat says. Then, technology can step in to smooth out the edges and locate consensus. “From the synthesis of diametrically opposing points of view,” in other words, “you can get a better experience than requiring each of them to provide a completely balanced viewpoint.”

“That is the opportunity that having an objective, algorithmic intermediary provides you,” Bharat says. “If you trust the algorithm to do a fair job and really share these viewpoints, then you can allow these viewpoints to be quite biased if they want to be.”
[emphasis from Nieman Lab]
A few thoughts:
1. It is very encouraging that Krishna Bharat is thinking about this, even if only as a piece of what he’s doing.
2. He’s right that whether or not you can trust the algorithm matters tremendously.
3. I remain skeptical that there’s any incentive for the algorithm to challenge me. Does he believe that challenging my biases is something I actually want, and will click on, so that this vision fits neatly with Google’s bottom line? Or is he suggesting that he and his team are motivated by more than the bottom line?
Bottom line: it’s great that he’s thinking about this, but if he wants us to truly trust the algorithm, he needs to explain why we should believe it’s a priority.