The Future

The problem isn’t whether people can identify fake news

It’s whether they feel like it.

Americans across the political spectrum trust mainstream outlets like CNN and the Wall Street Journal more than fringe sites like Breitbart and Infowars, according to a working paper released this week by Yale researchers. There are partisan differences, to be sure — Republican-leaning respondents reported much higher levels of trust in Fox News, for instance — but the overall message was clear: news consumers are often more capable of evaluating the trustworthiness of media outlets than they’re given credit for. What the study also seems to confirm is that the ability to identify trustworthy sources may never have been the real problem. The real problem is the willingness to engage.

In an era of widespread concern about the spread of misinformation on Twitter and Facebook, media critics framed the results as a heartening sign. Politico, Slate, and Nieman Lab agreed that the study, which has not yet been peer reviewed, showed that there might be hope for Facebook’s plan to poll its users about which news sources they trust.

But even though Republican-leaning respondents barely trust Infowars and Breitbart, as the results suggest, both sites still have enormous reach on social media. In other words, people often share information that, if they stop to think about it, they know is dubious — they just don’t take a moment, as they scroll through their news feeds, to think critically about whether a claim is likely to be true.

“The more that people engage in analytic thinking and careful consideration, the more they converge on what’s true and what’s false, rather than getting more extreme and partisan,” said David Rand, a professor of psychology at Yale and one of the authors of the study, in an interview with The Outline.

You don’t necessarily earn the currency of likes and shares by engaging in prosocial behavior. Ideas that flatter your followers’ existing ideologies get visibility, even if the particulars are sketchy or distorted, and many news sites across the political spectrum have built that serotonin boost into a viable business model. In balmier days, the same dynamic worked well for social media platforms like Facebook. The more people engaged with content — regardless of whether it was genuine news, a political hoax, or pictures of their friends’ dogs — the more time they spent on the site and the better it was for the social network’s bottom line.

“One of the major challenges here is that in a certain sense social media, and Facebook in particular, is really designed in an optimal way to discourage analytical thinking,” Rand said. “The basic environment of social media is not something that predisposes people toward thinking carefully about the content they’re interacting with.”

The design consideration for a platform like Facebook, then, is how to encourage users to stop and think about whether an item is true or if they just want it to be. The company has tried a variety of strategies to fight misinformation over the past year, polling its users about which news sources they trust and partnering with third-party fact-checking organizations. Those efforts, though, can easily backfire: people can sandbag an outlet in a poll because they disagree with it politically, and flagging a link as “disputed” can have the unintended consequence of further entrenching users in incorrect beliefs.

And now the company may have a new strategy. It emerged yesterday that Facebook is testing a “downvote” button that lets users quash inappropriate or uncivil comments — though the message Facebook displays when you click the button suggests that the feature may just be a rebranding of the existing “hide comment” option, one that only affects whether the user sees the content, not how the content travels.

“My guess is people are going to be much more inclined to downvote things they don’t like instead of saving downvoting for things they think are actually false,” Rand said. “That’s a key distinction. We want to make the effort to somehow get people to discourage content that they actually think is false, and not just content they don’t like.”

While Facebook can’t be absolved of the way it has influenced major political events, it still seems prone to a stereotypically Silicon Valley optimism about people’s inclination to always do the right thing. Companies like Twitter and Facebook are perennially surprised that anyone would use their platforms to, say, recruit people to ISIS, or tweet swastikas and racist statements at other users, yet people inevitably do.

The problem to solve is not a matter of intellectual ability; it’s intention and empathy.

Jon Christian is a contributing writer for the Outline. Follow him on Twitter: @Jon_Christian.