Truth and Trust in the Age of Algorithms

A great deal has been written in the last few days about how Facebook determined which stories appeared in its “Trending” feature. The controversy began when Gizmodo published a story claiming to reveal an anti-conservative bias among the site’s “news curators”:

Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section, according to a former journalist who worked on the project. This individual says that workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users.

Several former Facebook “news curators,” as they were known internally, also told Gizmodo that they were instructed to artificially “inject” selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion—or in some cases weren’t trending at all. The former curators, all of whom worked as contractors, also said they were directed not to include news about Facebook itself in the trending module.

Naturally, the story generated not a little consternation among conservatives. Indeed, a Republican senator, John Thune, was quick to call for a congressional investigation.

Subsequently, leaked documents revealed that Facebook’s “Trending” feature was heavily curated by human editors:

[…] the documents show that the company relies heavily on the intervention of a small editorial team to determine what makes its “trending module” headlines – the list of news topics that shows up on the side of the browser window on Facebook’s desktop version. The company backed away from a pure-algorithm approach in 2014 after criticism that it had not included enough coverage of unrest in Ferguson, Missouri, in users’ feeds.

The guidelines show human intervention – and therefore editorial decisions – at almost every stage of Facebook’s trending news operation […]

The whole affair is not inconsequential because Facebook is visited by over one billion people daily and is now widely regarded as “the biggest news distributor on the planet.” In her running commentary on Twitter, Zeynep Tufekci wrote, “My criticism is this: Facebook is now among the world’s most important gatekeepers, and it has to own that role. It’s not an afterthought.”

Beyond the irritation expressed by conservatives, others have criticized Facebook for presenting its Trending stories as the product of a neutral and impersonal computational process. As the Guardian noted:

“The topics you see are based on a number of factors including engagement, timeliness, Pages you’ve liked and your location,” says a page devoted to the question “How does Facebook determine what topics are trending?”

No mention there of the human curators, and this brings us closer to what may be the critical issue: our expectations of algorithms.
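
To make that description concrete, here is a minimal, entirely hypothetical sketch of what a purely computational "trending" ranking might look like. Nothing below reflects Facebook's actual code: the factor names, weights, and decay constants are my own assumptions, chosen only to show how engagement, timeliness, liked Pages, and location could be folded into a single score with no human judgment visibly in the loop.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from math import exp

# Hypothetical illustration only: NOT Facebook's algorithm, just a toy
# scoring function built from the factors the help page names
# (engagement, timeliness, Pages you've liked, and your location).

@dataclass
class Topic:
    name: str
    engagements: int          # posts/likes/shares mentioning the topic
    last_activity: datetime   # most recent burst of activity (UTC)
    related_pages: set[str]   # Pages associated with the topic
    regions: set[str]         # regions where the topic is active

def trending_score(topic: Topic, user_liked_pages: set[str],
                   user_region: str, now: datetime) -> float:
    """Toy score: raw engagement, decayed by age, boosted by personal relevance."""
    hours_old = (now - topic.last_activity).total_seconds() / 3600
    recency = exp(-hours_old / 6)          # decays to half after roughly 4 hours
    page_affinity = 1.0 + 0.5 * len(topic.related_pages & user_liked_pages)
    locality = 1.5 if user_region in topic.regions else 1.0
    return topic.engagements * recency * page_affinity * locality

def rank_trending(topics: list[Topic], user_liked_pages: set[str],
                  user_region: str, top_n: int = 10) -> list[Topic]:
    """Rank candidate topics for one user and keep the top few."""
    now = datetime.now(timezone.utc)
    return sorted(topics,
                  key=lambda t: trending_score(t, user_liked_pages, user_region, now),
                  reverse=True)[:top_n]
```

The point of the toy is the gap it exposes: every constant in it, from the decay rate to the locality boost, is an editorial judgment someone had to make, whether or not a human curator ever touches the output.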

First, we should note that the word algorithm is itself part of the problem. In his thoughtful discussion of the Facebook story, Navneet Alang called the algorithm the “organizing principle” of our age. For this reason, we ought to be careful in our use of the term; it does both too much and too little. As Tufekci tweeted, “I *do* wish there were a better term than algorithm to mean ‘complex and opaque computation of consequence’. Language does what it does.”

Secondly, Rob Horning is almost certainly right in claiming that “Facebook is invested in the idea that truth depends on scale, and the size of their network gives them privileged access to the truth.” To borrow a phrase from Kate Crawford, Facebook wants to be the dominant force in a “data driven regime of ‘truth.’”

Thirdly, it is apparent that in its striving to be the dominant player in the “data driven regime of truth,” Facebook is answering a widely felt desire. “Because they are mathematical formulas,” Alang observed, “we often feel that algorithms are more objective than people.” “Facebook’s aim,” Alang added, “appears to have been to eventually replace its humans with smarter formulas.” 

We want to believe that Algorithms + Big Data = Truth. We have, in other words, displaced the old Enlightenment faith in neutral, objective Reason, which was to guide democratic deliberation in the public sphere, onto the “algorithms” that structure our digital public sphere.

False hopes and subsequent frustrations with “algorithms,” then, reveal underlying technocratic aspirations: the longing for technology to do the work of politics. That longing may be understandable, given how frustrating, difficult, and sometimes even dangerous the work of politics is, but it is misguided nonetheless.

Our desire for neutral, truth-revealing algorithms can also be framed as a symptom of a crisis of trust. If we cannot trust people or institutions composed of people, perhaps we can trust impersonal computational processes. Not surprisingly, we feel badly used upon discovering that behind the curtain of these processes are only more people. But the sooner these false hopes and technocratic dreams are dispelled, the better.