How does a machine make an ethical choice?

Sooner or later a driverless car will have to make a decision in a no-win situation.

Published Jun 24, 2016

Cambridge, Massachusetts - If it has to make a choice, will your autonomous car kill you or pedestrians on the street?

The looming arrival of self-driving vehicles is likely to vastly reduce traffic fatalities, but it also poses difficult moral dilemmas, according to a study published on Thursday in the journal Science. Researchers say autonomous driving systems will require programmers to develop algorithms to make critical decisions that are based more on ethics than technology.

“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” said the study by Jean-Francois Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon and Iyad Rahwan of the Massachusetts Institute of Technology.

“For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest - let alone account for different cultures with various moral attitudes regarding life-life tradeoffs - but public opinion and social pressure may very well shift as this conversation progresses.”

The researchers said adoption of autonomous vehicles offered many social benefits such as reducing air pollution and eliminating up to 90 percent of traffic accidents.

“Not all crashes will be avoided, though, and some crashes will require autonomous vehicles to make difficult ethical decisions in cases that involve unavoidable harm,” the researchers said in the study.

“For example, the vehicle may avoid harming several pedestrians by swerving and sacrificing a passerby, or it may be faced with the choice of sacrificing its own passenger to save one or more pedestrians.”

Social benefits

These dilemmas are “low-probability events” but programmers “must still include decision rules about what to do in such hypothetical situations,” the study said.

The researchers said they were keen to see adoption of self-driving technology because of major social benefits.

“A lot of people will protest that they love driving,” Shariff said, “but us having to drive our own cars is responsible for a tremendous amount of misery in the world.”

The programming decisions must take into account mixed and sometimes conflicting public attitudes.

In a survey conducted by the researchers, 76 percent of participants said that it would be more ethical for self-driving cars to sacrifice one passenger rather than kill 10 pedestrians.

But just 23 percent said it would be preferable to sacrifice their passenger when only one pedestrian could be saved. And only 19 percent said they would buy a self-driving car if it meant a family member might be sacrificed for the greater good.

The responses show an apparent contradiction: “People want to live in a world in which everybody owns driverless cars that minimise casualties, but they want their own car to protect them at all costs,” said Rahwan.

“But if everybody thinks this way then we end up in a world in which every car will look after its own passenger's safety or its own safety, and society as a whole is worse off.”

Clear guidelines

One solution, the researchers said, may be regulations that set clear guidelines for when a vehicle must prioritize the life of a passenger or of others, but it is not clear whether the public would accept this.

Rahwan said: “If we try to use regulation to solve the public good problem of driverless car programming we would be discouraging people from buying those cars.

“And that would delay the adoption of the new technology that would eliminate the majority of accidents.”

In a commentary in Science, Joshua Greene of Harvard University's Centre for Brain Science said the research showed the road ahead remained unclear.

“Life-and-death trade-offs are unpleasant,” he wrote. “No matter which ethical principles autonomous vehicles adopt, they will be open to compelling criticisms, giving manufacturers little incentive to publicise their operating principles.

“The problem, it seems, is more philosophical than technical. Before we can put our values into machines, we have to figure out how to make our values clear and consistent. For 21st century moral philosophers, this may be where the rubber meets the road.”

AFP
