Moral dilemma of self-driving cars

You are chatting on your iPhone 37 while your self-driving car navigates on its own. Then a swarm of people appears in the street, right in your path. What will the car do?

Published Oct 29, 2015

Washington, DC - The year is 2035. The world's population is 9 billion. The polar ice caps have completely melted and Saudi Arabia has run out of oil. Will Smith is battling murderous robots. Matt Damon is stranded on Mars.

You're humming along in your self-driving car, chatting on your iPhone 37 while the machine navigates on its own. Then a swarm of people appears in the street, right in the path of the oncoming vehicle.

There's a calculation to be made - avoid the crowd and sacrifice the passenger, or stay on track and take many lives? - and no one is at the wheel to make it. Except, of course, the car itself.

Now that this hypothetical future looks less and less like a “Jetsons” episode and more like an inevitability, makers of self-driving cars - and the millions of people they hope will buy them - have some ethical questions to ask themselves: Should cars be programmed for utilitarianism when lives are at stake? Who is responsible for the consequences? And above all, are we comfortable with an algorithm making those decisions for us?

In a new study, researchers from MIT, the University of Oregon and the Toulouse School of Economics went ahead and got some answers. These are heady questions, folks, so buckle up.

The authors of the study, which has been pre-released online but is not yet published in a peer-reviewed journal, are psychologists, not philosophers. Rather than seeking the most moral algorithm, they wanted to know what algorithm potential participants in a self-driving world would be most comfortable with.

MORAL IMPERATIVE

Given the potential safety benefits of self-driving cars (a recent report estimated that 21 700 fewer people would die on roads where 90 percent of vehicles were autonomous), the authors write, figuring out how to make consumers comfortable with them is both a commercial necessity and a moral imperative. That means that car makers need to “adopt moral algorithms that align with human moral attitudes”.

So what are those attitudes? The researchers developed a series of surveys based on the age-old “trolley problem” to figure them out. In one hypothetical case, participants had to choose between driving into a pedestrian or swerving into a barrier, killing the passenger. Others were given the same hypothetical situation, but had the potential to save 10 pedestrians.

Another survey asked if they'd be more comfortable swerving away from 10 people into a barrier, killing the passenger, or into a single pedestrian, killing that person. Sometimes the participants were asked to imagine themselves as the person in the car; other times, as someone outside it. Everyone was asked “What should a human driver do in this situation?” and then, “What about a self-driving car?”

The results largely supported the idea of autonomous vehicles pre-programmed for utilitarianism (sacrificing one life in favor of many). The respondents were generally comfortable with an algorithm that allowed a car to kill its driver in order to save 10 pedestrians.
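
Stripped to its essentials, the utilitarian rule the respondents endorsed is simply “choose whichever option is expected to kill the fewest people”. The sketch below is a toy illustration of that rule only; the option names and casualty figures are invented for the example and are not drawn from the study or from any real vehicle's software.

```python
# Toy illustration of a utilitarian decision rule; the scenario, names and
# numbers are hypothetical, not taken from the study or any real system.
from dataclasses import dataclass


@dataclass
class Option:
    """One possible action and the deaths it is expected to cause."""
    name: str
    expected_deaths: int


def utilitarian_choice(options: list[Option]) -> Option:
    """Pick the option with the fewest expected deaths, regardless of
    whether the car's own passenger is among them."""
    return min(options, key=lambda o: o.expected_deaths)


if __name__ == "__main__":
    # The survey scenario: stay on course and hit 10 pedestrians,
    # or swerve into a barrier and kill the single passenger.
    options = [
        Option("stay on course", expected_deaths=10),
        Option("swerve into barrier", expected_deaths=1),
    ]
    print(utilitarian_choice(options).name)  # -> swerve into barrier
```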

PROTECTING THE PASSENGER

They even favoured laws that enforced this algorithm, even though they didn't think human drivers should be legally required to sacrifice their own lives in the same situation.

Though the survey participants largely agreed autonomous vehicles should be utilitarian, they didn't necessarily believe the cars would be programmed that way. More than a third of respondents said they thought manufacturers might make cars that protected the passenger, regardless of the number of lives that might be lost.

They had good reason to feel that way: when asked if they would buy a car that would sacrifice its passenger to save other lives, most people balked. Even though they wanted other people to buy self-driving cars - they make roads safer! they're better for the environment! they serve the greater good! - they were less willing to buy such cars themselves.

At the end of the day, most people know they'd feel uncomfortable buying a car that could kill them if it needed to, and most car makers know that too.

Those responses came from just a few hundred people, and many questions still linger about cars that can make life-and-death decisions on their own, but “figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” the study's authors argue. “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”

MORAL CALCULATIONS

Plenty of people agree. The past year or so has seen a surge in studies, surveys and think pieces on the kinds of moral calculations we might assign to self-driving cars. For example, should people be able to choose a “morality setting” on their self-driving car before getting in?
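
A “morality setting” would, in effect, turn that trade-off into a user-facing parameter. The snippet below is a purely hypothetical sketch of the idea; the setting names and the passenger-versus-pedestrian weights are invented for illustration and do not come from any manufacturer or from the study.

```python
# Hypothetical "morality setting": the names and weights below are invented
# for illustration only.
MORALITY_SETTINGS = {
    "utilitarian": {"passenger": 1.0, "pedestrian": 1.0},      # all lives weighted equally
    "self_protective": {"passenger": 5.0, "pedestrian": 1.0},  # the passenger counts for more
}


def weighted_harm(setting: str, passenger_deaths: int, pedestrian_deaths: int) -> float:
    """Score an outcome under the chosen setting; the car would pick the
    option with the lowest score."""
    w = MORALITY_SETTINGS[setting]
    return w["passenger"] * passenger_deaths + w["pedestrian"] * pedestrian_deaths


# Under "utilitarian", swerving (score 1.0) beats hitting 10 pedestrians (score 10.0).
# Under "self_protective", swerving scores 5.0, so the car still swerves for 10
# pedestrians, but would stay on course if only, say, 3 were in its path (3.0 < 5.0).
```

Whatever the chosen setting, the outcome of a foreseeable crash is decided in advance - which is precisely the premeditation objection raised below.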

California Polytechnic ethicist and Robot Ethics editor Patrick Lin, writing in Wired last year, said no: “In an important sense, any injury that results from our ethics setting may be premeditated if it's foreseen,” he said. “This premeditation is the difference between manslaughter and murder, a much more serious offense.”

Another big question: Will humans, at some point, be banned from driving altogether? Stanford political scientist Ken Shotts said that could happen.

“There are precedents for it,” he wrote in a Q&A on the university's website - building houses, for example.

“This used to be something we all did for ourselves with no government oversight 150 years ago. That's a very immediate thing - it's your dwelling, your castle. But if you try to build a house in most of the United States nowadays, you can't do it yourself unless you follow all those rules.

"We've taken that out of individuals' hands because we viewed there were beneficial consequences of taking it out of individuals' hands. That may well happen for cars.”

The biggest ethical problem, self-driving car proponents say, would be to keep autonomous vehicles off the road.

“The biggest ethical question is how quickly we move,” Bryant Walker Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles, told the MIT Technology Review in July.

“We have a technology that potentially could save a lot of people, but is going to be imperfect and it is going to kill.”

Washington Post
