Bad answers

Google finally realized that racist search results are a problem

And did something about it.

Google just announced what it calls “structural changes” in how it evaluates the reliability of a web page, adjusting the secret dials and levers in its search ranking algorithm in order to “demote low-quality content.”

The move was in response to criticism that some searches — Google estimates 0.25 percent of its daily traffic — “have been returning offensive or clearly misleading content, which is not what people are looking for.”

Oh, like when Google said the Earth was flat and that Barack Obama was planning a coup and that women are evil? Or — sorry — when Google returned prominent but poorly labeled answers that appeared to be authoritative but were actually scraped from crap sites on the web? Or when it suggested that users typing “are Muslims” into its search bar might want to add “bad”? Or when a white supremacist website topped the search results for “Did the Holocaust happen?”

Yes! Like all of those. When Google says “content,” it’s including autocomplete, search results, and the direct answers that appear at the top of results and are read aloud when a search is performed by voice. Google says there are “trillions” of searches performed annually, which has been estimated to work out to 5.5 billion a day, which means at least 13 million searches a day would have returned offensive or clearly misleading content before this change.
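The arithmetic is easy to check. Here is a quick back-of-the-envelope sketch in Python, using the third-party estimate of 5.5 billion daily searches (Google publishes no official per-day count):

```python
# Back-of-the-envelope check: Google says 0.25 percent of daily
# traffic surfaced offensive or misleading content. The 5.5 billion
# searches/day figure is a third-party estimate, not Google's own.
daily_searches = 5.5e9   # estimated searches per day
problem_rate = 0.0025    # Google's 0.25 percent estimate

affected = daily_searches * problem_rate
print(f"{affected:,.0f} searches/day")  # 13,750,000 searches/day
```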

The changes to the algorithm — which determines which sites get to the top of search results, and therefore which sites are scraped for the direct answers that appear in a box at the top of the page — are likely to be the most impactful of everything Google announced. They are also completely opaque. Google releases no details about how its search algorithm works, in order to prevent bad guys, spammers, and competitors from gaming it. That makes sense, but it means it’s difficult to tell how big a difference this will make.

Google estimates 0.25 percent of its daily traffic “have been returning offensive or clearly misleading content”

Danny Sullivan over at Search Engine Land ran a test in light of Google’s announcement, comparing Google’s results with Bing’s. For “was the Holocaust fake,” Google had three denial results in its top ten, while Bing had six. This suggests that Google is getting more rigorous about filtering out hateful bullshit, but again, it’s difficult to tell — for one thing, Google says its changes haven’t propagated everywhere yet.

Google also announced two other initiatives. The first is changes to its Search Quality Rater guidelines, which human reviewers use to evaluate Google’s algorithm in internal experiments.

The second is tweaking its approach to user feedback. Google added a feedback form for Autocomplete suggestions and published its (short) policy for removing content from Autocomplete for the first time. It also made some small changes to the feedback form for Featured Snippets and other direct answers.

The Autocomplete feedback form is new and looks like this:

Google added a feedback form for its Autocomplete suggestions.

The feedback form for Featured Snippets and other direct answers looks like this:

Google made changes to the way it presents a feedback form for users on its Featured Snippets answers, which come from the web.

We’ve covered Featured Snippets quite a bit here at The Outline. The first story we wrote was titled “Google’s Featured Snippets are worse than fake news.” If the search rankings get better, Featured Snippets — which are direct answers pulled from high-ranking sites on the web — will naturally get better.

Google has also added icons to make the feedback form more eye-catching and encourage users to click it more often. (The company declined to say how much it was used before, but my guess would be not a lot — the link is tiny and greyed out, and it’s not obvious that it’s even a link or where it might go.)

This is what the feedback form on Featured Snippets looked like before the change:

And this is what it will look like after the change:

A Google Featured Snippet with the new icons for user feedback and explanation.

Google also added a few new options for feedback. The form used to say “What do you think?” with the option to respond with “This is helpful,” “Something is missing,” “Something is wrong,” or “This isn't useful,” and a section for comments.

Now it says, “This is helpful,” “I don’t like this,” “This is hateful, racist, or offensive,” “This is vulgar or sexually explicit,” “This is harmful, dangerous, or violent,” “This is misleading or inaccurate,” and asks for comments.

Sullivan reported that the criticism of Google’s results — including his own deep dive on what he called its “biggest-ever search quality crisis” — registered with the company, which started taking the issue seriously. “People [at Google] were really shellshocked by the whole thing,” Pandu Nayak, a Google Fellow who works on search quality, told Sullivan. But as of this writing, Google’s search results still credit the wrong person with inventing email.
