There were two reactions to come out of this weekend's revelation that Cambridge Analytica, a voter-profiling firm enlisted by the Trump campaign, had accessed 50 million Facebook accounts’ worth of data to, in the words of The New York Times, “identify the personalities of American voters and influence their behavior.” Many outlets and individuals referred to the access as a “breach” of Facebook’s data walls, an illicit action that took advantage of Facebook without its knowledge. But the collection and use of the data was incredibly normal. Facebook knew it, and didn't make a secret of it.
The reaction of Facebook’s own staff was one of light exasperation, as if explaining a mundane fact of life to children. Though Facebook itself issued a self-serious PR statement saying it had banned Cambridge Analytica from its platform, one of its VPs took to Twitter to explain that the data the company accessed was freely provided by Facebook itself. Alex Stamos, chief security officer at Facebook, explained in a series of now-deleted tweets:
“Kogan [one of Cambridge Analytica’s researchers] did not break into any systems, bypass any technical controls, or use a flaw in our software to gather more data than allowed. He did, however, misuse that data after he gathered it, but that does not retroactively make it a ‘breach’.”
“At the time of this quiz [app written to collect data], the Facebook API allowed app developers to see a larger portion of the data available to a user than we do now. This included names and likes from friends. There were privacy controls available to both the user of the app and their friends. The ability to get friend data via API, with the permission of a user, was documented in our terms of service, platform documentation, the privacy settings and the screens used to login to apps.”
The problem with the account Stamos gives is that it elides a larger issue: how users’ data is guarded, or more often collected, used, and sold, with their legal permission but only their dim awareness. All manner of sins in our modern tech-driven society can be covered by a single clause noting that “anonymized” data “may be shared” with “third parties.” Such a clause appears in most terms of service of the apps and platforms we use, with no opportunity to opt out or otherwise control our data; to use Facebook or Google is to conscript our information into service to “pay” for our use of them. It’s always been true that nothing is free, and that if there’s no product to buy, you are the product. The great achievement of the tech behemoths has been how easy they’ve made it to forget that.
Facebook was not alone in its calm, quietly frustrated reaction. Many Silicon Valley dwellers and people who follow the industry reacted similarly, some so annoyed that they seemed to think the outrage was disingenuous — of course these companies are using your data like this.
Am I the only one struggling to find the scandal in this morning's Cambridge Analytica reporting? — Aron Pilhofer (@pilhofer) March 17, 2018
Yes, there was a leak. But not by Facebook.
Yes, they used data to profile voters. So has just about every campaign for 25 years.
I literally cannot find anything else of substance
Also *ahem*. A good number of us had been objecting and pointing out the broader harms of the interaction between Facebook's business model and politics even when it appeared to benefit Obama/Democrats. pic.twitter.com/SVk4Iw86uG — zeynep tufekci (@zeynep) March 17, 2018
It is well-documented that policymakers have been incredibly slow to protect people from predatory data use, in part because most are tech-illiterate, and in part because the effects are so abstracted and difficult to measure. (Europe is doing a somewhat better job than the U.S., but tech companies are so powerful and well-resourced, and move so quickly, that it’s almost impossible for a mere government to pin them down.) Most companies that trade in the sale and manipulation of personal information are private and beholden to few rules other than the bare minimum of those they establish themselves, to avoid scrutiny and be able to say “we told you so” if an angry individual ever comes calling. Even if consumers are aware their data is being passed around, their ability to control it once it’s out there is virtually nil: if they request it be deleted from one data broker, it can simply be bought back from one of the several gigantic firms that have been storing it, too.
It is an open question what the actual effect of Cambridge Analytica’s work on the presidential election was, and what the outcome might have been without its influence (most references to its “psychographic” profiling in The New York Times’ story are appropriately skeptical). It would be hard to say without far more cooperation from the company and from Facebook itself. But the leak by one of its researchers is an incredibly rare glimpse into a fairly routine process in an industry so staggeringly enormous and influential, not just in politics but in our personal, day-to-day existence, that it’s difficult to believe it is anything but a mistake. But it isn’t, and wasn’t, a mistake. It is how things happened, and how they are still happening every day.
I’ve written about privacy issues on and off throughout my career, and I’d like to think that if I were better at it, people would care by now. But it’s profoundly difficult to get any normal person properly riled about it, because there is nothing to point to that shows them what the troves of personal data held by businesses and marketing companies can do. Many issues don’t get the coverage and reaction they deserve because the harm has no visible, concrete form. It seems too easy to look at the Cambridge Analytica leaks and then blame Facebook for Trump. But when this incident is added to the pile of other election-related scandals in which poor beset little Facebook yet again finds itself at the center, it becomes harder for everyone to ignore the connection.