Earlier today, Facebook released an update on the audit of third-party Facebook apps it promised to conduct after The Guardian and the New York Times revealed Cambridge Analytica harvested 87 million people’s personal information without their (or Facebook’s) permission. The update claims that Facebook has investigated thousands of apps hosted on the site and has suspended about 200 of them.
This sounds nice enough on its face. But banning an app from Facebook’s ecosystem is not the same as recovering the information developers took or forcing them to delete it, let alone tracking down who those developers might have shared that information with. The exact scale of user information that’s been taken and shared by Facebook app developers since the platform’s inception, made possible by leaky and/or poorly enforced app developer policies, may never be known. An audit won’t change this fact.
In fact, New Scientist reported today that myPersonality, one of the 200 apps Facebook recently kicked off its platform, gathered information from more than 6 million Facebook users and created robust personality profiles for more than 3.1 million of them.
myPersonality was a data-driven corporate micro-targeting project, which, like Cambridge Analytica, came from the minds of the University of Cambridge’s Psychometrics Centre. In fact, before researcher Aleksandr Kogan became involved with the Cambridge Analytica project, he spent time working on the myPersonality project, which was active from 2007 to 2012.
Lead researcher David Stillwell actually told New Scientist that myPersonality experienced a breach. (Stillwell’s Cambridge email address has been taken down, and he did not immediately respond to The Outline’s request for comment via Facebook.) But Stillwell did not clarify exactly when the breach happened, how much information was shared, who may have gotten access to this information, or what this information could be used for. Even if the data breach involved anonymized data, parties with enough auxiliary information could re-identify individual users, linking very sensitive information directly to them.
So let’s say a user wants to find out whether myPersonality had access to their information. Facebook’s audit update does claim that the “How can I tell if an app may have misused my Facebook information?” page, released in March, will allow users to see if any of these other apps have also taken their information without permission. But at the time of writing, the page still only checks for thisisyourdigitallife, the app used by Cambridge Analytica to harvest the personal data of Facebook users.
And perhaps more importantly, knowing which organization extracted your information is far from equivalent to regaining control over that information. In the case of myPersonality, New Scientist reported that between the data breach and business-as-usual information sharing, there’s no feasible way to identify every party that has now accessed that information.
In general, it’s not obvious that there’s a clear path to identifying information-extractors and holding them accountable. For users, this means that there are permanent consequences to any piece of information shared with a third party, consequences they’ll probably never know or understand in their entirety.
Per Facebook’s platform policy for app developers like myPersonality, parties are supposed to “Delete all of a person’s data you have received from us (including friend data) if that person asks you to, unless you are required to keep it by law, regulation, or separate agreement with us.”
But there’s no clear way to enforce this policy. Facebook claims that it will oust apps whose developers break its rules. But what can, or will, Facebook do to go after the data these parties already have? This still isn’t clear.
How other data mercenaries have used people’s information is instructive: it has ranged from political manipulation, to policing, to persistent micro-targeted ads. David Carroll, a professor of media design at The New School, told The Outline that a research organization like myPersonality or Cambridge Analytica can still hold on to the predatory algorithms and tools it trained on user data and can continue to use them to target users with ads. There’s no way to guarantee similar situations won’t happen again.
It’s difficult to argue that auditing the apps that made projects like myPersonality or Cambridge Analytica possible isn’t a step in the right direction. But Facebook has been aware of myPersonality and its methods since at least 2011. It didn’t take action until this past April, as a part of this audit.
This is yet another of countless examples showing that Facebook puts its business incentives before its users. And at F8, Facebook’s annual developer conference in early May, the company announced that it plans to continue collaborating with research institutions, without outlining any explicit guidelines for how these institutions will be examined and selected.
The Cambridge Analytica scandal was never just about Cambridge Analytica. It’s just one third party that’s been extracting and sharing user information, which Facebook has allowed, prioritized, and monetized since day one.