Power

Influence? In this economy?

Tech-savvy young people take internet literacy for granted, and that can cause all kinds of problems.

The most revelatory thing about l’affaire Devumi, the January 27 New York Times exposé on the shadowy company making millions inflating the follower counts of various online influencers, was not the news that such a company might exist. Nor was it that the company facilitated the hijacking of real Twitter accounts, the cloning of others, and the creation of new Twitter users wholesale. Nor was it the fact that such a company would claim to have a Manhattan address but actually be located in South Florida. And, given that the report liberally quoted teens whose online identities had been pilfered by Devumi, it surprised exactly zero people to find the phrase “Post Malone” in the opening paragraph.

No, the real kicker was the reaction of those inside the media, who were shocked — shocked! — to find out that the little numbers that measure their influence may not be 100 percent accurate. Second-tier media figures and actors — along with “over a hundred” people whom the Times called “self-described influencers” — were among the most flagrant offenders, propping up their follower counts to jump a few rows in the house of cards that is internet attention. Many of them blamed overeager assistants for buying the fake followers, or simply deleted their accounts.

Only a Hungary-based expat pornographer known as “Porno Dan” was forthcoming with what he thought was going on, telling the Times, “Countless public figures, music acts, etc. purchase followers. If Twitter was to purge everyone who did so there would hardly be any of them on it.”

This wasn’t even the first time that a major publication with “New York” in its name had published an account of the ease of gaming follower numbers. A little over a year ago, comedian Joe Mande described, in The New Yorker, his effort to break a million Twitter followers in the shadiest way possible. The story ends with Twitter CEO Jack Dorsey inviting Mande to give a talk at Twitter, driving home the fact that, yes, Twitter knows exactly what’s going on.

One of the myriad Conversations around the internet concerns the idea of filter bubbles: that people inadvertently tailor their online media diets to flatter their pre-existing views. While the academic jury is still out on whether getting one’s news from social media makes a person any more or less partisan than seeking out right-wing talk radio or lefty political journals did a generation ago, there exists another filter bubble that, though it has received relatively little public attention, affects the majority of people reading this article: the average producer of online content is far more internet-savvy than the average consumer of online content. While it’s always been expected that journalists and intellectuals be more knowledgeable about their beats or fields of study than the general population, the internet (and specifically Twitter) has a universalizing effect, making it easy to assume that because you and everyone in your circle are perpetually Online, everyone else is, too. Chuck Klosterman once retweeted an observation I made about Clinton voters being predisposed to watching Family Guy; in that moment, I felt like Tom Brokaw. The issue, of course, is that a person Tom Brokaw’s age would likely find my previous sentence incomprehensible.

I came to this realization by accident. In 2015, I conscripted an army of Twitter bots for an academic study meant to measure the degree to which healthy speech norms could be enforced online. One of the variables I wanted to test was whether a person’s perceived social status would affect the gravity with which their words were treated. In other words, some of my humble bot warriors would need to look like they had a lot of followers, and that meant I needed to google the phrase “buy Twitter followers.” The first result that popped up was Devumi. The site was discreet, fast, and cheap — very cheap. In the summer of 2015, 500 Twitter followers (the smallest package they offered) set you back a dollar.

The results were striking. High-status bots — the ones for which I’d purchased followers on Devumi — caused a significant behavioral change. However, tweets from low-status bots (with around ten followers) had no effect.

When I presented this work at academic conferences, colleagues kept offering the same critique: couldn’t people tell that these were fake accounts with fake followers? How could people be so easily fooled? Part of the answer is that it’s strikingly simple to create realistic-looking accounts on Twitter. Even my bots, which were constrained in that I had ordered them to tweet exactly the same message, over and over again, were believable enough that subjects responded as if they were real.
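For the mechanically curious, here is a minimal sketch of what one of those bots boils down to. To be clear, this is not the study’s actual code: the library (Tweepy), the placeholder credentials, and the wording of the canned reply are all stand-ins for illustration. Nothing in the code varies across conditions; what varied in the experiment was the follower count attached to the account doing the replying.

```python
# Hypothetical sketch of a "sanctioning" bot, NOT the study's actual code.
# It posts the same fixed reply to a target tweet, over and over, from an
# account whose follower count has been inflated ahead of time.
import tweepy

# Placeholder credentials; a real bot account needs its own API keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# A stand-in for the study's fixed admonishment; the real wording differed.
CANNED_REPLY = "@{user} Remember that there's a real person on the other end of that tweet."

def sanction(tweet_id: int, screen_name: str) -> None:
    """Reply to one offending tweet with the same canned message."""
    api.update_status(
        status=CANNED_REPLY.format(user=screen_name),
        in_reply_to_status_id=tweet_id,
    )
```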

But I also believe that there was something deeper at play. The way that people present themselves and the way that ideas spread are radically different online than in the physical world. Understanding this is more a matter of intuition than of reason; for many people who grew up decades before the internet, this subtle difference in norms often cannot be learned. Older Americans are at the highest risk for news fraud (and indeed for all forms of internet fraud): a recent study that used web-tracking data showed that politically interested Americans over 60 were the most likely to choose to read fake news, whether pro-Clinton or pro-Trump, during the 2016 election.

This “savviness gap” creates a space that trolls exploit for fun and profit. The crucial break from real-world communication is that online messages are essentially free to send, and anonymity removes any reputational cost of saying rude or spammy things. Whether we’re discussing blunt-force DDoS attacks meant to overwhelm a server until it crashes or personalized phishing scams, the asymmetry between the cost of sending countless messages and the likelihood that at least one recipient will take their validity at face value is fundamental to the logic of online trolling. Manipulating online discussions by spamming hashtags and engaging in bad-faith arguments with armies of fake accounts falls somewhere in the middle of this spectrum. Accounts run by a Kremlin-affiliated organization called the Internet Research Agency, for example, were found to have retweeted Donald Trump nearly half a million times (compared to a scant 50,000 retweets for Hillary Clinton) in the build-up to the 2016 election, and to have deployed bots to stir the pot on both the left and right edges of politically charged hashtags. And sure enough, on Friday, Special Counsel Robert Mueller filed charges against the Internet Research Agency and thirteen Russian nationals for conspiracy and fraud related to attempts to sway the election. It’s easy to fool people who want to be fooled.
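To put rough numbers on that asymmetry (the figures below are my own illustrative choices, not drawn from the article or any study): when sending costs effectively nothing and each recipient independently takes the bait with some tiny probability, the chance that at least one target is fooled climbs toward certainty as the volume of messages grows.

```python
# Back-of-the-envelope illustration of the spammer's asymmetry: sending is
# nearly free, so even a minuscule per-recipient success rate pays off at
# scale. The probabilities and volumes below are made up for illustration.
def chance_of_at_least_one_mark(p_per_recipient: float, n_messages: int) -> float:
    """Probability that at least one of n independent recipients is fooled."""
    return 1 - (1 - p_per_recipient) ** n_messages

for n in (100, 10_000, 1_000_000):
    print(f"1-in-1,000 gullibility, {n:>9,} messages: "
          f"{chance_of_at_least_one_mark(0.001, n):.4f}")
# Roughly: 0.10 at 100 messages, and effectively 1.0 by 10,000 messages.
```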

In trying to understand how so many people can be fooled by obviously fake messages, journalists and academics who live online should think seriously about companies like Devumi. Partisanship imposes a perceptual screen when we consume information, and the more we imbue Online with meaning, the more meaning it has for everyone else, deepening the incentives to deceive. We all post; the payoff for posting online is measured in retweets and favorites, unique impressions and shares. Although the prevalence of fake Twitter accounts has been plainly obvious for years, we’ve been conditioned by self-interest and groupthink into believing that all of these internet metrics are real. After all, it’s easy to fool people who want to be fooled.

Kevin Munger is a PhD Candidate in the Department of Politics at NYU and a member of the University's Social Media and Political Participation lab. He will begin a position as an Assistant Professor of Political Science at Penn State University in Fall 2019.