If you've ever taken an undergraduate philosophy class, or just seen enough episodes of The Good Place, you'll almost certainly be familiar with the ‘Trolley Problem’: the classic dilemma in which you must decide, in a split second, whether you, the station master, should allow a runaway train to kill five people — or whether it is better to switch the tracks, intentionally killing one. Typical classes on the Trolley Problem gauge students’ intuitions about what would be the ethically right thing to do in a bunch of different scenarios — for instance where the one person who would die is young and the five people are old, or (in one variant that is, frankly, blatantly fatphobic) the one person who would die is a “very fat man” whom you have to push off a bridge to stop the train.
When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: it presents an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and holds it up as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.
But maybe I’m wrong. Because, if we believe the tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice, what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs.
Last October, the journal Nature published an article entitled “The Moral Machine Experiment”. In short, the article published the results of a massive worldwide experiment run by MIT’s Media Lab to determine what, in various Trolley Problem-style scenarios, a self-driving car should do. The write-up in Nature focuses mainly on cross-cultural differences (respondents in Asia were more likely to say self-driving cars should swerve to avoid old people, for example), and is perhaps most interesting in providing evidence for the cultural relativism of moral intuitions. But the article came to a lot of people’s attention (including mine) as the result of an utterly chilling chart, shared in a tweet by the World Economic Forum.
The Nature article doesn't actually argue that self-driving cars should be encoded solely with the moral intuitions of the people who answered the quiz, just that these intuitions should be taken into account. But the implications of the chart are obvious. People favor the “athletic” over the “large” (more evidence that people engaging with the Trolley Problem think killing fat people is basically okay), and high-status individuals (“doctors,” “executives”) over the homeless. “Criminals” fare worst of all, their lives deemed slightly more important than cats’, but less so than dogs’. Just how exactly is an AI supposed to recognize a “criminal” anyway? Eye mask, stripey top, big bag with a dollar sign on it? Would it be surprising if the algorithm ended up falling back on, you know, racism? Will black people have to walk everywhere with strollers to avoid self-driving cars deliberately trying to ram them?
The Trolley Problem case is far from the only indication that, as AI technology develops, it may be driven more and more by the prejudices of the human beings responsible for programming it. Around the same time last year as the Nature article appeared, Amazon announced that it would be scrapping plans to use AI to automate its hiring process. By training an AI on past hiring data, the company had hoped to develop “an engine” that would “spit out the top five” of 100 resumes, the candidates it would then hire.
But, because the tech industry has been so historically dominated by men, the AI effectively ended up learning that it was not desirable to hire women. The system penalized resumes that included the word “women’s” (as in “women’s chess club captain”), and downgraded graduates of two (unnamed) women's colleges. Even after researchers tweaked the algorithm to correct for this, they were “unable to guarantee” that the AI “would not devise other ways of sorting graduates that could prove discriminatory” — a sentence whose euphemistic feel leads me to believe that the AI in question absolutely did manage to devise other ways of discriminating against women.
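Amazon's actual model was never published, but the failure mode described above, a system that absorbs the gender skew of its training data, is easy to illustrate. The sketch below is entirely hypothetical: the toy resumes and the crude word-count scoring are invented for illustration, and a real learned model is far more complex, but the mechanism is the same. Words that happen to co-occur with past rejections, like “women's,” end up penalized.

```python
from collections import Counter

# Hypothetical historical hiring data (invented for illustration).
# The skew is baked in: resumes mentioning "women's" were rarely hired.
past_resumes = [
    ("software engineer chess club", True),
    ("software engineer robotics team", True),
    ("software engineer women's chess club", False),
    ("software engineer women's coding society", False),
    ("software engineer hiking club", True),
]

hired_words, rejected_words = Counter(), Counter()
for text, was_hired in past_resumes:
    (hired_words if was_hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Crude stand-in for a trained model: score a resume by how often
    its words appeared in past hires versus past rejections."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two resumes identical except for one word: the one mentioning
# "women's" scores lower, purely because of the skew in the data.
a = score("software engineer chess club")
b = score("software engineer women's chess club")
print(a, b)  # a > b
```

Note that simply deleting the offending token from the model doesn't cure anything: the skew just flows into whatever proxies correlate with it (hobbies, colleges, phrasing), which is presumably why the researchers could not guarantee the system wouldn't find “other ways of sorting” candidates.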
Clearly the tech industry, whose sources continue to dominate coverage of AI, wants people to think that AI is going to turn out to be a brilliant solution to all sorts of problems, from drug discovery to the provision of coffee. This optimism is even echoed by some people on the political left: see the utopian vision of “fully automated luxury communism” in which AI technology allows us to almost completely eliminate labor.
But with all these heralds of the soon-to-be-triumphant power of AI has come a certain paranoia: fears that AI will be used, not only to cement existing injustices, but to produce new ones. Suspicions that AI is being used to spy on us appear to be coming to a head: see recent speculation that the “10-year challenge” meme was a ruse concocted to help train facial recognition algorithms.
I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own. But weirdly, the Trolley Problem and Amazon hiring-algorithm cases make me think that maybe there could be an upside.
In his final completed book, The Weird and the Eerie, the cultural theorist Mark Fisher identified the second of his two title concepts as being “fundamentally tied up with questions of agency.” “A sense of the eerie,” he claimed, is found most readily in “landscapes partially emptied of the human. What happened to produce these ruins, this disappearance? What kind of entity was involved? What kind of thing was it that emitted such an eerie cry? … What kind of agent is acting here? Is there any agent at all?”
One example Fisher gives of an eerie landscape is Sutton Hoo, a collection of Anglo-Saxon burial mounds a few miles’ walk from his home in Felixstowe on the eastern coast of England. Another is the port of Felixstowe — Britain’s largest container port — which was founded in the 1960s adjacent to what had been a faded seaside resort town. Sutton Hoo is eerie, at least in part, because “it constitutes a gap in knowledge. The beliefs and rituals of the Anglo-Saxon society that constructed the artifacts and buried the ships are only partly understood.” The port, meanwhile, feels eerie because of how heavily automated the process of shipping has become. “There's an eerie sense of silence about the port,” Fisher writes, “that has nothing to do with actual noise levels. The port is full of... inorganic clangs and clanks that issue from ships as they are loaded and unloaded.” But it has been completely emptied of “any traces of language and sociability.”
“The contrast between the container port, in which humans are invisible connectors between automated systems, and the clamor of the old London docks, which the port of Felixstowe effectively replaced, tells us a great deal about the shifts of capital and labour in the last forty years. The port is a sign of the triumph of finance capital; it is part of the heavy material infrastructure that facilitates the illusion of a ‘dematerialized’ capitalism. It is the eerie underside of contemporary capitalism's mundane gloss.”
For Fisher, “Capital is at every level an eerie entity: conjured out of nothing, capital nevertheless exerts more influence than any allegedly substantial entity.” And yet it does so in ways we don't understand — in ways that are, often, not even transparent to the people who most benefit from capitalism financially (it constitutes a gap in our knowledge). Adam Smith’s famous term ‘invisible hand’ is a symbol of the eeriness of capital: pushing us around, as if from nowhere, making it easy to naturalize our economy’s deep injustices. If we can't even accurately point to our oppressor, how can we ever hope to effectively resist them?
But AI, against its inventors’ intentions, seems like it could help us solve this problem of agency. They almost certainly don't realize that they’re doing this, but effectively, the people who are designing AIs are, for the most part, attempting to automate existing power relations. In doing so, they are making those relations visible. As a hiring tool, Amazon’s HR algorithm was a dismal failure. But as a parody of what Amazon was already doing, it was a triumph. The same goes for the MIT researchers’ finding that people would be happy for self-driving cars to be programmed to hit ‘criminals’: the human species may as well be designing these robots expressly to perform a cruel impression of us.
Imagine that AI technology is widely rolled out, just as Silicon Valley seems to want it to be. Our world is thus made infinitely worse. All the injustices our present system is producing are cemented, amplified by automation. But this then means that the public, as a whole, is able to become aware of two things.
The first is that what is happening is bad. This is something people both can and should be aware of right now but, historically, the powerful have been able to deflect responsibility for what they are doing by confusing agency and nature: if it’s anyone’s fault, it’s not theirs; if they’re the ones doing it, then they’re really sorry, but this is just how things naturally have to be. In an AI-dominated society, however, this excuse can stand revealed for the piss-weak switcheroo that it is. We’ll all know what’s doing it: it’s the AI! And so we can never be fooled into thinking that any of the injustices the AI is perpetuating are natural (well, notwithstanding a few chuds who insist on spouting infantile babble like “math can’t be racist”): the AI is, after all, a human creation. And so people can become aware of a second thing — that this can change.
A new Trolley Problem presents itself, perhaps. The train, automated, is now governed by an AI. There is no station master any more. But neither has the train malfunctioned: it is hurtling towards the people on the tracks for no better reason than that this is what it has been programmed to do. The situation could seem hopeless. But if all six people worked together to switch off the AI, then perhaps none of them would need to die after all. In the end, attempting to discover this outcome has always been the only acceptable way of responding to the thought experiment.