Who would want to drive a killer car?

By Henrik Andersen

As self-driving cars become more sophisticated, they are entrusted with ever greater responsibilities. But what kinds of decisions will you trust your car to make for you? Should your car sacrifice animals and unhealthy people to save children and doctors?

Imagine a trolley hurtling at full speed down a track. Ahead, five workers are tied to the rails. As a bystander, you control a lever that can steer the trolley onto another track, but on that alternate track stands another person. You have two options: pull the lever, or do nothing. Would you sacrifice one life to save five? This ethical dilemma is called the Trolley Problem. It was introduced by the philosopher Philippa Foot in the 1960s and later developed by Judith Jarvis Thomson, and it has long been a purely theoretical problem that occupied philosophers. But with the advent of self-driving vehicles, it is becoming a practical problem for car manufacturers and programmers.

Advanced cars

Cars are becoming increasingly sophisticated. Most newly produced cars are equipped with systems that help the driver stay in the lane, park the vehicle, and check blind spots. These advances are just the beginning. In an autonomous vehicle ("AV" for short), the car itself decides the appropriate speed and the distance to the car in front; the entire driving task is placed under the control of a computer program. That program has to make choices continuously, including ones that resemble the trolley problem. AVs must make a multitude of decisions simultaneously: choosing the right speed, keeping a proper distance to other vehicles, accounting for weather conditions, and slowing down near schools. With access to large amounts of driving data, these systems can also learn, for example about appropriate speed adjustments and about areas prone to accidents. The result may be operating systems that clearly outperform the average human driver. According to a study by the consulting firm McKinsey, AVs could reduce traffic accidents by up to 90 percent. Most of today's traffic incidents are caused by human error, typically distracted or drunk drivers, and AVs would drastically reduce the risks that human error brings to traffic.
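
As a rough illustration, the continuous decision-making can be thought of as a loop that repeatedly reads the sensors and adjusts the target speed. The sketch below is purely hypothetical: the thresholds, rules, and function names are invented for clarity and bear no relation to any real vehicle's software.

    # A hypothetical sketch of the kind of rule an AV evaluates many
    # times per second. All numbers and conditions are illustrative only.

    def target_speed(speed_limit_kmh, distance_to_car_ahead_m,
                     raining, near_school):
        """Return a target speed given a handful of sensor readings."""
        speed = speed_limit_kmh
        if near_school:
            speed = min(speed, 30)      # slow down near schools
        if raining:
            speed *= 0.8                # reduce speed in bad weather
        if distance_to_car_ahead_m < 20:
            speed *= 0.5                # keep distance to the car ahead
        return speed

    print(target_speed(80, distance_to_car_ahead_m=15,
                       raining=True, near_school=False))  # -> 32.0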

Moral programming

The million-dollar question is: how should these AVs be programmed? There is a tension at the heart of the cars' ethical programming. A vehicle can be programmed to do as little harm as possible to pedestrians, or it can prioritize the safety of its passengers. If the brakes fail, the car must either steer off the road, endangering the people inside, or drive straight ahead, possibly killing pedestrians.

With more complex algorithms, the car might also calculate the relative value of different pedestrians. One program might value the elderly less than children, because they have fewer years left to live. Another might hold the opposite view, on the grounds that the elderly deserve respect and have contributed more to society. The car might choose to run over a group of unemployed people and criminals rather than a group of entrepreneurs and doctors; that is, if the vehicle's ethical programming is based on the value individuals bring to society, we can expect vehicles that run over criminals. The rollout of AVs turns these hypothetical dilemmas into concrete problems that manufacturers, consumers, and policymakers alike have to take a stance on. Humans will still be in charge of the activity, but the decisions themselves will be made by algorithms, and that places enormous responsibility on programmers and computer scientists.
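
To make the dilemma concrete, here is a deliberately simplified sketch of what such a "moral program" could look like. The categories, weights, and function names are hypothetical illustrations, not anything a manufacturer has published.

    # A hypothetical "value to society" weighting. The numbers are
    # invented for illustration; no real manufacturer has published
    # such a scheme.
    HARM_WEIGHTS = {
        "child": 1.0,
        "doctor": 0.9,
        "adult": 0.7,
        "elderly": 0.5,
        "criminal": 0.2,
        "passenger": 0.8,
    }

    def total_harm(people):
        """Sum the weighted harm of injuring everyone in a group."""
        return sum(HARM_WEIGHTS[p] for p in people)

    def choose_action(pedestrians, passengers):
        """Pick the option with the lowest weighted harm.

        'continue' hits the pedestrians; 'swerve' sacrifices the passengers.
        """
        if total_harm(passengers) < total_harm(pedestrians):
            return "swerve"
        return "continue"

    # Five passengers versus four children and an adult on the crossing.
    print(choose_action(
        pedestrians=["child", "child", "child", "child", "adult"],
        passengers=["passenger"] * 5,
    ))  # -> "swerve": this program sacrifices the passengers

Change the weights, and the same code sacrifices the pedestrians instead; the ethics live entirely in numbers someone has to choose.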

Market mechanisms

From a consumer perspective, some moral programs are clearly less likely to succeed commercially than others. Very few customers would be willing to use a car that prioritizes pedestrians over the passengers in the vehicle. Imagine yourself and four friends sitting in an AV. As you approach a pedestrian crossing, the car's brakes fail. On the crossing are four children and an adult. Would you be satisfied with your car steering off the road, resulting in certain death for you and your friends? A 2016 study published in Science shows that while many people agree with such logic in the abstract, they would not themselves buy or ride in such a vehicle. Cars with moral programs that prioritize the passengers' safety are more appealing to consumers than their more altruistic counterparts. It is reasonable to assume that people would rather buy killer cars than self-sacrificing ones.

Arguably, the most difficult problems posed by AVs are legal ones. When there is no driver, who is responsible in the event of an incident? If you send your car to pick up your kid from football practice and the car crashes on the way, are you responsible, being the owner of the car? Is the manufacturer responsible, having produced it? Or is the company that programmed the car responsible? And while AVs may decrease the overall number of traffic accidents, who would want to let a car make those life-or-death decisions for them?