Back in 2013, when Soroush Vosoughi was a graduate student at MIT, he'd just left the lab when news of the Boston Marathon bombing broke. As the city entered a lockdown while authorities hunted the killers, he remained glued to Twitter, searching for updates. As the week wore on, media attention turned to the many inaccuracies that had surfaced on Twitter and Reddit in the wake of the carnage.
Vosoughi found himself questioning the role social media had played in the tragedy. He ultimately asked his adviser, Deb Roy, whether he could change the topic of his thesis to study how misinformation spreads online.
This week, Roy and Vosoughi — now an MIT postdoc — published the latest in a series of research projects related to that switch, a far-reaching paper in the journal Science that draws a grim conclusion: on Twitter, false news spreads far more rapidly than real news.
“I don’t think it’s because of Twitter and other social media platforms, but because of something that has always existed in human nature,” Vosoughi said. “People like to pass gossip and rumors around. It’s just that it’s now amplified.”
To tackle the problem, they examined a vast data set: millions of tweets, posted between 2006 and 2017 and provided by Twitter, in which users shared news stories. Drawing on six independent fact-checking organizations, including Snopes and PolitiFact, they sorted the stories into true and false categories — and then, diving deep into the data, examined which group spread faster and farther across Twitter.
The results were bleak. Overall, false news stories are 70 percent more likely to be retweeted than true ones, and it takes a true story six times as long to reach its first 1,500 viewers. Vosoughi and his collaborators suspect that has to do with the psychological content of false news, which tends to evoke surprise and disgust — strong reactions, they believe, that encourage viral sharing.
It’s not entirely clear what can be done to improve the flow of reliable information through social media. Vosoughi doesn’t think censorship of incorrect information is the answer. Instead, he suggests that platforms like Twitter could attach information to viral content and accounts showing users whether a source has a track record of sharing dependable reports — and then let users make up their own minds.
“You can think of it kind of like making caloric information available to people so that they can make decisions as to what food to buy,” he said. “This is similar: making information available to people so they can decide what to share and what not to share.”