Thousands of artificial intelligence developers and researchers — including Elon Musk, Google DeepMind co-founder Demis Hassabis, and Google Machine Intelligence head Jeffrey Dean — just signed a “Lethal Autonomous Weapons Pledge,” vowing to resist delegating the decision to kill in a military context to a machine. On its face, this pledge seems like a step in the right direction, a recognition of the concerns of tech employees. But here’s the main problem with it: the top drone manufacturers for the U.S. military — including but not limited to Northrop Grumman, Boeing, General Atomics, and Textron, which together make up 66 percent of the U.S. military drone market — did not sign on to the pledge.
“We will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” the pledge reads. “We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”
For years, the Department of Defense (DoD) has sought out new military tech from Silicon Valley to be deployed in warfare. But employees at tech companies like Google, Amazon, Microsoft, and IBM have recently begun to voice strong opposition to the use of their companies’ technology in a military context. The vast majority of the Lethal Autonomous Weapons Pledge’s 2,400 signatories are scientists and academic researchers. Northrop Grumman and Boeing aren’t exactly AI software giants, but their absence as signatories leaves the pledge — which covers the trade and use of autonomous weapons tech, not just its creation — incomplete.
The pledge also doesn’t cover other military-adjacent, unethical uses of artificial intelligence technology that may not result in killings but harm people all the same. Companies like Amazon sell cheap facial recognition tech to police departments, automating the process of law enforcement surveillance. Companies like Microsoft also have historically cozy relationships with ICE. Last year, ICE recruited top tech companies to continuously surveil people who cross the border and predict “bad behavior” among them, in what it called the “Extreme Vetting Initiative.” ICE abandoned the effort because it proved technically infeasible, not out of ethical concern.
The Future of Life Institute, which created and orchestrated the pledge, is a non-profit research organization aimed at cultivating the ethical development of new technology. But compared to, say, the UN, the organization has no real bargaining power and no mechanism for enforcing the pledge against violators. At worst, a company that violates the pledge would just look bad. Meanwhile, the DoD doles out millions of dollars in contracts to companies to develop high-tech weapons and weapon detectors. If the DoD offers a pricey contract, a company could be strongly tempted to break the pledge.
Granted, bad press and employee resistance do have the potential to initiate real policy change within companies with military contracts. Back in March, Gizmodo reported that Google was developing a system for the DoD that would use machine learning to analyze drone footage and identify potential targets. The once-secret initiative, internally called “Project Maven,” elicited a series of employee protests and even resignations. In June, Google opted not to renew its Project Maven military contract when it expires in 2019.
The company has since said that it will not use artificial intelligence for military weapons or surveillance, and now a handful of its top-tier employees have signed an ethics pledge.
Drone strikes needlessly kill hundreds of civilians — we don’t know the exact number, since official figures underestimate the death tolls and the strikes themselves have limited accuracy. Delegating the act of killing away from people is a tempting prospect. But for companies, there’s a line between saving lives and becoming complicit in a tech-driven, twenty-first-century military-industrial complex.