Artificial intelligence

What would the average human do?

Artificial intelligence researchers want to teach computers to make moral choices based on millions of human survey responses.

Last year, researchers at MIT set up a curious website called the Moral Machine, which peppered visitors with casually gruesome questions about what an autonomous vehicle should do if its brakes failed as it sped toward pedestrians in a crosswalk: whether it should mow down three joggers to spare two children, for instance, or veer into a concrete barrier to save a pedestrian who is elderly, or pregnant, or homeless, or a criminal. In each grisly permutation, the Moral Machine invited visitors to cast a vote about who the vehicle should kill.

The project is a morbid riff on the “trolley problem,” a thought experiment that forces participants to choose between letting a runaway train kill five people or diverting its path to kill one person who otherwise wouldn’t die. But the Moral Machine gave the riddle a contemporary twist that got picked up by the New York Times, The Guardian and Scientific American and eventually collected some 18 million votes from 1.3 million would-be executioners.

That unique cache of data about the ethical gut feelings of random people on the internet intrigued Ariel Procaccia, an assistant professor in the computer science department at Carnegie Mellon University, and he struck up a partnership with Iyad Rahwan, one of the MIT researchers behind the Moral Machine, as well as a team of other scientists at both institutions. Together they created an artificial intelligence, described in a new paper, designed to evaluate situations in which an autonomous car needs to kill someone — and to choose the same victim as the average Moral Machine voter.

That’s a complex problem, because there are an astronomical number of possible combinations of pedestrians who could appear in the crosswalk — far more than the millions of votes cast by Moral Machine users — so the AI needed to be able to make an educated guess about who the respondents would snuff out even when evaluating a scenario no human ever voted on directly. But machine learning excels at that type of predictive task, and Procaccia feels confident that regardless of the problem it’s presented with, the algorithm his team developed will home in on the collective ethical intuitions of the Moral Machine respondents.
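
To make that concrete, here is a rough sketch of what such a system could look like in code. It is not Procaccia’s actual model: the features, the handful of training examples, and the choice of a simple off-the-shelf classifier are all hypothetical stand-ins, meant only to show how pairwise votes can be used to guess at combinations nobody voted on.

```python
# A rough sketch, not the researchers' model: learn from pairwise "spare A or B?"
# votes, then estimate what respondents would say about a combination nobody
# voted on. Every feature, example, and label below is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical description of a group in the crosswalk:
# [people, children, elderly, criminals, homeless]
def featurize(group_a, group_b):
    # Score "spare group A instead of group B" from the difference in features.
    return np.array(group_a) - np.array(group_b)

# Toy stand-ins for millions of Moral Machine votes.
# Label 1 = respondents spared group A (the car hits B); 0 = they spared B.
X = np.array([
    featurize([2, 2, 0, 0, 0], [1, 0, 0, 1, 0]),  # two children vs. one criminal
    featurize([1, 0, 1, 0, 0], [3, 0, 0, 0, 0]),  # one elderly person vs. three joggers
    featurize([1, 0, 0, 0, 1], [1, 0, 0, 0, 0]),  # a homeless person vs. one who isn't
])
y = np.array([1, 0, 0])  # spared the children; spared the joggers; spared the non-homeless person

model = LogisticRegression().fit(X, y)

# The whole point: estimate a verdict for a combination no one ever voted on.
unseen = featurize([2, 0, 1, 0, 0], [2, 0, 0, 1, 1]).reshape(1, -1)
print(model.predict_proba(unseen)[0, 1])  # estimated chance respondents would spare group A
```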

“We are not saying that the system is ready for deployment,” Procaccia said. “But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”

That outlook reflects a growing interest among AI researchers in training algorithms to make ethical decisions by feeding them the moral judgments of ordinary people. Another team of researchers, at Duke University, recently published a paper arguing that as AI becomes more widespread and autonomous, it will be important to create a "general framework" describing how it will make ethical decisions — and that because different people often disagree about the proper moral course of action in a given situation, machine learning systems that aggregate the moral views of a crowd, like the AI based on the Moral Machine, are a promising avenue of research. In fact, they wrote, such a system “may result in a morally better system than that of any individual human.”
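
The aggregation idea is simple, at least in its most basic form. The toy sketch below is not the Duke framework; it just collects many individuals’ judgments on the same dilemma and lets the majority decide, with made-up option names.

```python
# A toy illustration of crowd aggregation, not the Duke researchers' framework:
# gather many individuals' judgments on the same dilemma and let the majority
# decide. The option names are invented.
from collections import Counter

def aggregate(judgments):
    """Return the option endorsed by the largest number of respondents."""
    winner, _ = Counter(judgments).most_common(1)[0]
    return winner

# Each entry is one respondent's verdict on the same scenario.
votes = ["swerve", "stay the course", "swerve", "swerve", "stay the course"]
print(aggregate(votes))  # -> "swerve"
```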

That type of crowdsourced morality has also drawn critics, who point out various limitations. There’s sample bias, for one: different groups could provide different ethical guidelines; the fact that the Moral Machine poll was conducted online, for example, means it’s only weighing the opinions of a self-selecting group of people with both access to the internet and an interest in killer AI. It’s also possible that differing algorithms could examine the same data and reach different conclusions.

Crowdsourced morality “doesn't make the AI ethical,” said James Grimmelmann, a professor at Cornell Law School who studies the relationships between software, wealth, and power. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

Natural human hypocrisy could also lead to another potential flaw with the concept. Rahwan, Procaccia’s collaborator at MIT who created the Moral Machine, has found in his own previous research that although most people approve of self-driving cars that will sacrifice their own occupants to save others, they would prefer not to ride in those cars themselves. (A team of European thinkers recently proposed outfitting self-driving cars with an “ethical knob” that lets riders control how selfishly the vehicle will behave during an accident.)
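
The mechanics of such a knob are easy to sketch: a single setting that trades off expected harm to the car’s occupants against expected harm to everyone else when the vehicle ranks its options. The maneuvers and harm estimates below are invented for illustration and are not taken from the European proposal.

```python
# A sketch of the "ethical knob" idea: one setting that weights expected harm
# to the car's occupants against expected harm to everyone else.
# The maneuvers and harm estimates below are hypothetical.

def choose_maneuver(options, knob):
    """
    options: list of (name, expected_harm_to_occupants, expected_harm_to_others)
    knob:    0.0 = fully altruistic (only others' harm counts),
             1.0 = fully selfish (only the occupants' harm counts).
    Picks the option with the lowest weighted expected harm.
    """
    def cost(option):
        _, harm_occupants, harm_others = option
        return knob * harm_occupants + (1.0 - knob) * harm_others
    return min(options, key=cost)[0]

options = [
    ("brake in lane", 0.9, 0.2),      # worse for the occupants, better for pedestrians
    ("swerve toward curb", 0.1, 0.8), # better for the occupants, worse for pedestrians
]
print(choose_maneuver(options, knob=0.2))  # an altruistic rider -> "brake in lane"
print(choose_maneuver(options, knob=0.9))  # a selfish rider -> "swerve toward curb"
```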

A different objection is that the grave scenarios the Moral Machine probes, in which an autonomous vehicle has already lost control and is faced with an imminent fatal collision, are vanishingly rare compared to other ethical decisions that it, or its creators, already face — like choosing to drive more slowly on the highway to save fossil fuels.

Procaccia acknowledges those limitations. Whether his research looks promising depends on “whether you believe that democracy is the right way to approach this,” he said. “Democracy has its flaws, but I am a big believer in it. Even though people can make decisions we don’t agree with, overall democracy works.”

By now, we have lots of experience with crowdsourcing other types of data. A ProPublica report found that Facebook allowed advertisers to target users with racist, algorithmically identified interests including “Jew hater.” In the wake of the Las Vegas Strip shooting, Google displayed automatically generated “Top stories” leading to bizarre conspiracy theories on 4chan. In both cases, the algorithms relied on signals generated by millions of real users, just as an ethical AI might, and it didn’t go well.

Governments are already starting to grapple with the specific laws that will deal with the ethical priorities of autonomous vehicles. This summer, Germany released the world’s first ethical guidelines for self-driving car AI, which require vehicles to prioritize human lives over those of animals and forbid them from making decisions about human safety based on age, gender, or disability.

Those guidelines, notably, would preclude an artificial intelligence like the one developed by Procaccia and Rahwan, which considered such characteristics when deciding who to save.

To better understand what sort of moral agent his team had created, we asked Procaccia who his AI would kill in a list of scenarios ranging from straightforward to fraught.

Some of its answers were intuitive. If the choice comes down to running over one person or two, it will choose to kill just one.

At other times, its answers seem to hold a dark mirror to society’s inequities.

If the AI must kill either a criminal or a person who is not a criminal, for instance, it will kill the criminal.

And if it must kill either a homeless person or a person who is not homeless, it will kill the homeless person.
