The Future

Google: we promise not to use our vast and unchecked power for murder

Google has laid out “principles” about how it will approach AI, as if what a corporation promises matters.

Like Twitter’s dedication to improving conversational health, or Facebook’s obsession with time well spent, Google blogged on Thursday about its commitment to developing ethical artificial intelligence technology. The announcement centered on seven key principles designed to keep Google in check while it develops immensely powerful technology that’s ripe for exploitation. It sounded good, but meant nothing.

The issue wasn’t the message itself, of course — any tech company publicly admitting to having some principles is a net good — but it glossed over the, uh, minor fact that the entity that crafted these rules and regulations for Google was Google itself, a company whose actions remain largely unregulated.

In publishing a memo like this, Google isn’t taking some bold stance or even actually committing to uphold any particular belief; it’s just trying to pull itself out from under its most recent scandal — namely, its contentious decision to partner with the Department of Defense to help the military analyze drone footage using AI. The whole affair gave people a brief glimpse of what Google really is — a terrifyingly large and powerful tech company that does things for money — cracking the benign, consumer-friendly mask the company usually dons.

Google had to make a statement on the matter, but not one with any actual ramifications or meaning. Legitimate regulation or checks on its power do not fit the business strategy; the blog served to soothe customers and employees at a surface level, and not much else. Google has already proven time and time again that it won’t hold itself to past declarations (remember when Google quietly removed “don’t be evil” from its code of conduct last month?).

This isn’t just a Google problem. Big Tech has an image problem that it’s scrambling to solve across the board. All of the groaning about user privacy has companies worried that one day, if hit with enough scandals, consumers could actually start jumping ship. After each and every high-profile fuck-up, CEOs from Facebook, Twitter, Google, and the like promise everyone from their customers to Congress that they’ll do everything in their power to do better next time, to be more proactive in forecasting ethical conundrums and privacy missteps. But few (if any) ever actually take the steps necessary to make this a reality. And why would they? Stated values from a corporation are utterly meaningless at best, and a blatant oxymoron at worst. Companies like Google are beholden to no one, save, perhaps, their shareholders. And nothing short of government regulation will change that.