Voice of reason in tricky driverless car ethics

Published Jun 27, 2017


Can a self-driving car be programmed to choose who dies or is injured in an unavoidable crash? Should it be allowed to? And who would be held responsible if such a situation arose? The operator, or the programmer?

These are just some of the highly contentious questions the motoring world faces as cars edge ever closer to fully driverless capability, and until they're answered, the future of autonomy remains at an impasse.

The German government recently tasked an ethics commission, comprising 14 philosophers, lawyers, theologians, engineers and consumer protection advocates, with drawing up the world's first set of ethical guidelines for automated driving. One of the commission's members, Professor Christoph Lütge, offers his take on some of autonomy's stickier subjects in this interview:

Q: Professor Lütge, let’s imagine a situation where a collision with a person is inevitable. However, the car could hit either a child or an older person. What decision should the self-driving car make here?

A: Self-driving cars should not make decisions based on a person’s characteristics, whether age, physical condition or sex. Human dignity is inviolable. Which is why vehicles cannot be programmed along the lines of: “If in doubt, hit the man with the walking frame”.

Q: Even though most drivers would probably make that decision?

A: The decision is not being made by a human being with a moral framework and the capacity to make a choice. Instead, we are looking at how a system can be programmed to deal with future scenarios. Imagine this situation: A car is on a narrow path with a cliff face on the right and a sharp drop to the left. Suddenly, a child appears up ahead and the car cannot brake in time. Should the car drive into the child or off the road and into the abyss? Programmers cannot make the decision to sacrifice the driver. The only option is to brake as effectively as possible.

Q: But shouldn’t the system be able to calculate the number of victims and base its decisions on that?

A: This was a topic of much debate in the commission, but we came to the conclusion that one can justify a reduction in the number of casualties.

Q: Doesn’t this contradict the ruling made by the German Federal Constitutional Court? The Court ruled that an airplane hijacked by terrorists cannot be shot down, even if it is heading towards a target where there is a significantly higher number of people.

A: There is an important ethical difference here: Nobody can decide to bring about the death of an individual. The plane in this scenario contains real people whom we can identify. In the case of automated driving, we are talking about general programming to reduce casualties without knowing who the victims are or classifying them beforehand.

Apart from that, it's not just a question of numbers. You have to factor in the severity of the damage. If you are faced with an either/or situation where a car can merely graze several people, then it shouldn't choose to fatally injure one individual.
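Purely as an illustrative sketch of the comparison Lütge describes, and not anything from the commission's report: the decision rule weighs predicted injury severity rather than merely counting people, and it uses no personal attributes at all. The severity scale and numbers below are invented for illustration.

```python
# Hypothetical severity scale per affected person: 0.0 (unharmed) .. 1.0 (fatal).
# Outcomes carry no personal characteristics (age, sex, condition), only severities.
def total_harm(severities: list[float]) -> float:
    """Sum of predicted injury severities; no identities involved."""
    return sum(severities)

# The either/or situation from the interview:
graze_three = [0.1, 0.1, 0.1]   # three people lightly injured
kill_one    = [1.0]             # one person fatally injured

# Minimizing severity-weighted harm picks the grazing manoeuvre,
# even though it involves more people than the alternative.
best = min([graze_three, kill_one], key=total_harm)
assert best is graze_three
```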

Q: But what about the thousands of scenarios between these extremes? One manufacturer will choose one outcome while another opts for a different one.

A: I believe there should be a neutral body that manages a catalogue of scenarios with universally accepted standards. This organisation could also test the technologies before manufacturers take them to market.

Q: Is it ethically acceptable at all to shift the responsibilities that we as humans bear over to technology?

A: This responsibility is not being shifted to technology per se but to the manufacturers and operators of the technology. We want regulations that clearly set out when the driver is in control and when technology is in control – and who is liable. Furthermore, we don’t want a situation where the system suddenly hands over control to the driver for whatever reason. And as responsibility can change between the car and the driver, every journey should be documented in a black box. International standards have to be developed for these scenarios.
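To make the black-box idea concrete, here is a minimal sketch of what a journey record of control handovers might look like. Every name, field and reason string here is hypothetical; no real standard for such logs existed at the time of this interview.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List


class Controller(Enum):
    DRIVER = "driver"
    SYSTEM = "system"


@dataclass
class HandoverEvent:
    """One control-responsibility transition, as a black box might record it."""
    timestamp: datetime
    new_controller: Controller
    reason: str  # e.g. "driver deactivated automated control"


@dataclass
class TripLog:
    """Append-only journey record: who was in control, and when, is never ambiguous."""
    trip_id: str
    events: List[HandoverEvent] = field(default_factory=list)

    def record_handover(self, new_controller: Controller, reason: str) -> None:
        self.events.append(
            HandoverEvent(datetime.now(timezone.utc), new_controller, reason)
        )

    def controller_at(self, t: datetime) -> Controller:
        """Who bore responsibility at time t (the driver, before any handover)."""
        current = Controller.DRIVER
        for event in self.events:
            if event.timestamp > t:
                break
            current = event.new_controller
        return current


# Example: the driver exercises the right to deactivate automated control.
log = TripLog(trip_id="example-journey-001")
log.record_handover(Controller.SYSTEM, "driver engaged automated driving")
log.record_handover(Controller.DRIVER, "driver deactivated automated control")
```

Because the log records every transition with a timestamp, liability for any moment of the journey can be read directly from it, which is exactly the ambiguity the commission wants regulation to eliminate.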

Q: What if I don’t want to hand over responsibility?

A: In the commission, we were told by engineers that driving becomes less safe when humans intervene. However, humans have a basic right not to be obliged to submit to technology. In other words, it must be possible to deactivate automated control.

There are still many cases where the human response is better, anyway.

It is only ethically acceptable to allow automated driving if it will cause less damage than a human being behind the wheel. We assume that this will be possible in the near future – to such an extent that it will lead to a significant ethical improvement in driving. Our aim is to contribute to this development through these guidelines. 
