This past June, a little-known video news outlet called SourceFed published a story that seemed, at first, like a bombshell.
In a slickly produced video released on YouTube and Facebook, SourceFed alleged that Google was suppressing negative search suggestions about Hillary Clinton. The evidence: If you typed "Hillary Clinton ind" into Yahoo or Bing, those search engines suggested a search for “Hillary Clinton indictment.” Google suggested “Hillary Clinton Indiana.”
To tie the theory together, SourceFed's Matt Lieberman pointed to connections between Eric Schmidt, the executive chairman of Google's parent company, Alphabet, and the Clinton campaign.
The video racked up some 26 million views on Facebook and another million on YouTube. It ended up on Breitbart and The Daily Caller. Donald Trump told Business Insider that if the allegations were true, "It is a disgrace that Google would do that."
It soon became clear that SourceFed's story was demonstrably incorrect. Google explained that its autocomplete algorithm "will not show a predicted query that is offensive or disparaging when displayed in conjunction with a person's name," no matter whose name it is. Lieberman would have noticed this himself had he checked the autocompletes for "Bernie Madoff cri" or "Ted Kaczynski cri."
“This is perhaps the biggest story we’ve ever reported.”
Predicted queries are the autocomplete terms that drop down in the search bar based on what a user has already typed and what other users have searched for in the past. Google doesn't suppress negative search results associated with a person's name, but it does suppress negative terms in predicted queries. Without this filtering, Google would open itself up to libel lawsuits and even more accusations of bias.
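The mechanism can be illustrated with a toy sketch. This is not Google's actual code, and every query log, name, and blocked term below is a made-up assumption; it only models the general idea described above: rank past queries that extend the typed prefix by popularity, then drop disparaging completions when the query concerns a person's name.

```python
# Toy model of prefix-based predicted queries with a name-safety filter.
# NOT Google's implementation; all data here is hypothetical.
from collections import Counter

# Hypothetical log of past user searches, with frequencies.
PAST_QUERIES = Counter({
    "hillary clinton indiana": 900,
    "hillary clinton indictment": 800,
    "hillary clinton india": 300,
})

# Hypothetical set of terms treated as disparaging next to a person's name.
DISPARAGING_TERMS = {"indictment", "crime", "crimes", "criminal"}

def is_person_query(prefix: str) -> bool:
    # Toy heuristic: the query starts with a known person's name.
    names = ("hillary clinton", "bernie madoff", "ted kaczynski")
    return prefix.lower().startswith(names)

def predict(prefix: str, limit: int = 3) -> list:
    """Return the most popular past queries extending the prefix,
    filtering out disparaging completions for queries about a person."""
    prefix = prefix.lower()
    candidates = [q for q, _ in PAST_QUERIES.most_common()
                  if q.startswith(prefix)]
    if is_person_query(prefix):
        candidates = [q for q in candidates
                      if not any(t in q.split() for t in DISPARAGING_TERMS)]
    return candidates[:limit]

print(predict("hillary clinton ind"))
# → ['hillary clinton indiana', 'hillary clinton india']
```

Note that the filter applies regardless of whose name is typed, which is why the same suppression shows up for "Bernie Madoff cri" as for any other person.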
The presidential election prompted a serious discussion of fake news: deliberately inflated or falsified stories, designed for maximum shareability and tending to favor Trump, sometimes produced by content farms in Macedonia or an enterprising suburban dad in LA.
Corrections, however, haven't gotten as much attention. As the Columbia Journalism Review has documented, they do not spread as fast or as far as the initial report. Corrections just aren't as viral, unless they're hilarious or sent to The New York Times Vows column.
SourceFed, an LA-based outlet owned by the digital video arm of Discovery Communications, must have felt some pressure to correct the record. It quickly uploaded a second clip in which Lieberman read Google's statement and called it "a very reasonable response." SourceFed also added the statement to the YouTube description of its first video, though it left the original, erroneous video intact and did not admit fault. The video with Google's response got 166,300 views, roughly 0.6 percent of the original's audience.
Lieberman also tried to deflect some criticism by saying SourceFed is a comedy site that also tackles current events. He had a tautological defense: "We made a video that posed a question that we felt needed asking," he said, "and the discussion that that video is generating shows that people are interested in this."
Unlike the Macedonian content farms or the LA dad, it doesn’t seem like SourceFed fabricated the Google story, exactly — it just did some bad reporting and turned it into an easy-to-digest video. Lieberman promised to release a more formal follow-up the following week. The follow-up video never materialized. Lieberman and SourceFed did not respond when asked what happened to it.
Corrections offer a particular challenge in the age of social media, said Melissa Zimdars, an assistant professor of communication at Merrimack College who created a popular guide to sussing out fake news. She pointed to another incorrect story that made a splash recently, this one claiming that CNN inadvertently broadcast a half hour of pornography instead of an Anthony Bourdain show. Zimdars noticed that after the mistake was exposed and publications updated their original stories, the incorrect headlines often remained on the outlets’ original social media posts without acknowledging the error, allowing the misinformation to keep spreading.
"While we think of how we can update information after it goes viral, we need to think about other ethical decisions that news organizations make in whether to correct the information on social media posts after substantially altering stories," Zimdars said.
Since the Google video, SourceFed has posted scores of new videos: about Ken Bone, LaCroix seltzer and even, remarkably, efforts to fight fake news. Not one has accumulated as many views as its discredited Google story, which is still available in its original form on its Facebook and YouTube pages. The Facebook description still reads: "Did Google manipulate search for Hillary? This is perhaps the biggest story we've ever reported."
And on the web, many derivative stories — "Here Are 10 More Examples of Google Search Results Favorable to Hillary," "Google accused of manipulating searches, burying negative stories about Hillary Clinton" — persist, sans corrections.
Fake news, of course, long predates this election season. (Remember the too-good-to-be-true tale of the 13-year-old boy who was said to have used his father's credit card to hire expensive hookers to play Halo with him?) But in an age when vast swaths of journalism amount to little more than aggregation, the phenomenon of credible news outlets picking up juicy yet unverified stories from the fringes of the media probably should have raised the alarm long before this past election season.
Jon Stewart, speaking in New York City last week, didn’t pull any punches on the subject.
"What the media has become is an information-laundering system," he said.