The Future

Why do we get sad when robots die?

It’s not about seeming “human” — it’s about the work machines do for us.

2019 was less than a week old when Elon Musk narrowly avoided being sued for murder. Broadly speaking, he was responsible for a Tesla killing somebody. But the (ridiculous) details matter: the Tesla Model S in question was in self-driving mode, and, more importantly, the “person” it hit was a robot called Promobot. Putting aside that the robot’s manufacturer, also called Promobot, likely set up the whole incident as a hoax, I have to ask whether, and in what sense, we can “kill” or cause harm to an artificially intelligent system, and, as robotics technology becomes increasingly sophisticated, what it is that sets human life apart from a robot at all. Why do I have to ask this? Because I am a philosophy graduate student.

It is well-documented that humans’ feelings about AIs are… complicated. In 2013, American soldiers in Taji, Iraq, held a funeral, replete with a 21-gun salute and the bestowing of a Purple Heart and a Bronze Star, for a robot known simply as Boomer (he/him pronouns, obviously). Boomer was a MARCbot, a shockingly inexpensive (read: roughly $20,000) brand of robot used for the vital function of detecting bombs. That Boomer was personified in “life” and “death” is especially striking when contrasted with familiar military dehumanization techniques. We talk of strikes, not killings; collateral damage, not civilian death. We certainly did not hold funerals for the thousands of Iraqi civilians who died the same year that Boomer did.

A cynical read on this disparity would be that soldiers saw Boomer as more human than the nameless casualties. While this could explain why Boomer got a funeral, we might instead think that it was funeral-worthy because of its value to humans and human-centered projects. The funeral was held for Boomer to honor what it was able to do for us, not because Boomer was the sort of thing worth honoring in its own right. But if this is right, what could we say of other vital objects, like an office water cooler? They have value to humans, and yet a funeral would strike most of us as bizarre.

Boomer would then be contrasted with humans, whose lives are honored and valued not (just) because they’re useful to other people, but because they possess “rational capacity” (i.e., the ability to think logically). Boomer might be able to do stuff once we program it, but humans can think for themselves; this is why we value them and not just what they do for us.

The philosopher Jeff McMahan takes a further step and proposes a tiered model of moral status based on psychological capability. The more sophisticated a being’s rational capacities, the more value its life has (up to a given threshold, above which all persons are equal). According to McMahan, humans are in a higher tier along with animals that demonstrate high intelligence, such as dolphins, octopuses, etc.

Although McMahan does not directly address AI in his work, such a tiered model would rank a sufficiently advanced AI above certain human beings. Cognitively complex AIs would be valued for their own sake. Even the Silicon Valley types who salivate over the opportunity to twist open a Soylent and explain the awesome power of the singularity have reason to be concerned with this outcome. Elon Musk has described sufficiently advanced AI as an existential threat to humanity, perhaps explaining the idea behind his girlfriend Grimes’ maybe-ironic song about bowing to our robot overlords. While other experts doubt that AI can ever surpass human intelligence, McMahan’s argument adds a dimension missing from this discussion. It is not just a question of whether robots can overtake humanity by intellectual force, but rather whether we will owe them their freedom, and whether they would even owe us the same.

McMahan’s way of thinking does make a certain amount of sense when applied to AI: if what we value about human lives is ultimately their rational capacities, then some non-humans are going to end up slated in the top tier. But applied to people, this tiering of psychological capability has led McMahan to some highly questionable conclusions, particularly around sexual assault and culpability.

Now, the premise that rational capacity is what makes a life valuable is a big “if,” one that was perhaps best tackled in the pop-culture sphere on The Good Place, the most annoyingly very-much-about-philosophy show on NBC. Think of the scene from the first season where the protagonists have decided to “kill” Janet, an omnipotent, omniscient robot-like being who manifests as a human. Janet assures them she can feel no pain, since she is not human, but, as a failsafe, if anybody comes close to killing her, she has been programmed to beg for her life through the most human-like display of emotions conceivable. The protagonists have no problem approaching the kill switch as she assures them she will feel no pain, but as she starts screaming and pleading for her life, they can’t bring themselves to go through with it. What happened here? Did they suddenly remember that Janet is an omniscient, omnipotent being with rational capacities running Jeremy Bearimies around our own? Or was it her emotional display that made them less willing to take her life?

Similar results came out of a study published in the academic journal Social Cognition, which asked whether human beings would save robots instead of “anonymous human lives” if the robots showed sufficient emotional capacities. Indeed, humans happily sacrificed robots to save human lives when the robots looked and acted like robots, but the more the robots looked and acted like humans, the more willing participants were to sacrifice unknown humans in order to save their robot pals. Rational capacities be damned, it was the AIs getting hungry that made participants more willing to kill anonymous strangers in their place. The study’s authors ultimately concluded that the more robots demonstrate emotional capacities, the more likely human beings are to attribute moral status to them — in other words, to think that it is wrong to harm the robots.

If this study is right, then it suggests that intelligence has very little to do with why we care about AI. This isn’t just one study, either. The literature on how humans view robots is nascent and disjointed, but it’s growing: several studies by Dutch industrial design professor Christoph Bartneck, dating back to 2007, have argued that giving an AI a physical body makes people less likely to harm it, a conclusion replicated by German researchers in 2018.

If researchers indeed had proof that humans might one day see robots as “more human” than humans, that would be front-page news. This might explain why Science Daily, in summarizing the Social Cognition study, ran with “Robot saved, people take the hit” as its headline. This is an overstatement: it is not clear what conclusions about the relative value of human and robot life we should draw from how people behaved in an abstract thought experiment. We should also consider that the experiment’s subjects were college undergraduates, and that the robots in the study were presented as white dudes.

What we can say is that a human’s emotional attachment to a machine depends on context, and that the level of attachment can change if the machine is humanized in some manner. The examples of Boomer and of Opportunity, the Mars rover that was terminated this past February, fall within the first, context-dependent category. Why was Boomer given such an elaborate funeral? Because humans developed a particular kind of social relationship with the machine, and the mourning of its loss is understandable within that context. Boomer “served” on the battlefield for a long time, and war always creates weird bonds. The same can be said for Opportunity, which after 14 years “died” in a ferocious dust storm and was honored by a final transmission of Billie Holiday’s “I’ll Be Seeing You” from its NASA crew back on Earth.

Neither Boomer nor Opportunity had a particularly “human” appearance, and humans’ attachment to them was not grounded on being physically recognized as such. What mattered is that these objects and robots took on the features of being human, by performing tasks so vital to the people who depended on them.

The flipside of this is perhaps a more familiar sentiment: that people can ascribe human qualities to inanimate objects in front of them while disregarding the lives of humans who appear to them as if off-screen. Consider how Trump talks of caravans of illegal immigrants and not families seeking aid, or Obama’s talk of “overcoming” terrorism while deliberately obscuring his administration’s own policy of drone murder.

Much remains unclear about how humans will relate to highly intelligent robots, especially those designed to resemble humans. But once AIs do look and act more human, in the sense of experiencing and physically conveying feelings and desires, people may well treat them as more human than they would if they were clearly robots.

So if we shelve the click-baity question of “would you kill somebody to save Siri?,” we can ask ourselves something a bit more honest: What qualities make humans valuable for their own sake? Science alone won’t be able to answer that.

JJ Lang is a Ph.D. candidate in philosophy at Stanford University. Previously, he wrote for The Outline about the concept of dogwhistling.