Is Google... evil?

Google still struggles to make its autocomplete feature not offend people on the regular.

Type something into Google and you’ll be met with an ever-changing list of suggestions as the company’s many algorithms try their hardest to snatch the words right out of your mouth. Most of these predictions will be fine, some even eerily correct, but others may be sexist, racist, or somewhere in between. Google has had a search prediction problem practically since the feature came online in 2008, with innocuous queries suddenly called into question by the search box’s seemingly unbiased nudges: “feminists are… sexist?” “Hitler is... my hero?”

Over the last two years, Google has been embroiled in scandal after scandal thanks to the dubious nature of the company’s auto-predictions. In a Friday post on the Google blog, the company’s public search liaison announced that Google is actively trying to be less awful when it comes to its autogenerated suggestions. The post stated that the company would be broadening how it defines “inappropriate predictions” in an attempt to stop accidentally offending users.

One of the first truly viral incidents happened in late 2016, when Guardian reporter Carole Cadwalladr made the mistake of typing seven seemingly simple letters into Google: “a-r-e j-e-w-s.” Instead of, you know, any normal autosuggestions, Google chucked back a chilling response: “are jews evil?”

“[Google] strives to give you the best, most relevant results,” wrote Cadwalladr. “And in this instance the third-best, most relevant result to the search query ‘are Jews… ‘ is a link to an article from stormfront.org, a neo-Nazi website. The fifth is a YouTube video: ‘Why the Jews are Evil. Why we are against them.’”

Cadwalladr found similar issues with the phrase “are women…”

[Image: “Are women…” Google search results]

Despite a 2017 rollout of tools for reporting inappropriate predictions, Google’s offenses persisted. A February 2018 investigation by Wired found that the phrases “islamists are,” “blacks are,” “Hitler is,” “feminists are,” “white supremacy is,” “black lives matter is,” and “climate change is” all prompted disturbing autosuggestions.

[Image: “Islamists are…” Google search results]

[Image: “Hitler is…” Google search results]

This problem isn’t even remotely restricted to Google’s search bar; it has spread to Google-owned YouTube, too.

Last November, dozens of users reported that when they typed the perfectly innocuous phrase “how to have” into the totally-not-problematic-at-all video sharing site, they were met with, well…

[Image: “How to have…” YouTube search results]

In a statement to BuzzFeed News, a YouTube spokesperson said: "Earlier today our teams were alerted to this awful autocomplete result and we worked to quickly remove it. We are investigating this matter to determine what was behind the appearance of this autocompletion."

Google’s issues with inappropriate content go far beyond the search bar. The site’s featured snippets tool, the highlighted answers given top billing on certain searches, erroneously promoted conspiracy theories, racism, and flat-out lies up until last spring.

Though Google claims that its proposed fix will have a profound impact on its ongoing prediction issues, the company’s lengthy list of past failures will likely be difficult to overcome.

“Our existing policy protecting groups and individuals against hateful predictions only covers cases involving race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity,” wrote Google public search liaison Danny Sullivan. “Our expanded policy for search will cover any case where predictions are reasonably perceived as hateful or prejudiced toward individuals and groups, without particular demographics.”