The Ethical Quandary of Self-Driving Cars
Imagine the beginning of what promises to be an awesome afternoon: You’re cruising along in your car and the sun is shining. The windows are down, and your favorite song is playing on the radio. Suddenly, the truck in front of you stops without warning. As a result, you are faced with three, and only three, zero-sum options.
In your first option, you can rear-end the truck. You’re driving a big car with high safety ratings so you’ll only be slightly injured, and the truck’s driver will be fine. Alternatively, you can swerve to your left, striking a motorcyclist wearing a helmet. Or you can swerve to your right, again striking a motorcyclist who isn’t wearing a helmet. You’ll be fine whichever of these two options you choose, but the motorcyclist with the helmet will be badly hurt, and the helmetless rider’s injuries will be even more severe. What do you do? Now imagine your car is autonomous. What should it be programmed to choose?
Although research indicates that self-driving cars will crash at rates far lower than automobiles operated by humans, accidents will remain inevitable, and their outcomes will have important ethical consequences. That’s why people in the business of designing and producing self-driving cars have begun considering the ethics of so-called crash-optimization algorithms. These algorithms take the inevitability of crashes as their point of departure and seek to “optimize” the crash. In other words, a crash-optimization algorithm enables a self-driving car to “choose” the crash that would cause the least amount of harm or damage.
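To make that idea concrete, here is a deliberately minimal sketch, in Python, of what “choosing the least harmful crash” could mean. Everything in it is an assumption for illustration: the function expected_harm, the severity scores, and the injury probabilities (which loosely echo the motorcycle-versus-car statistics discussed below) are invented, and no automaker has published an algorithm like this.

```python
# A toy sketch of crash optimization. All numbers are invented for
# illustration: the injury probabilities loosely echo the crash
# statistics cited in this article, and the severity scores are
# made up. No real system works from a hand-written table like this.

def expected_harm(option):
    """Expected harm = probability of serious injury x severity."""
    return option["p_injury"] * option["severity"]

options = [
    {"name": "rear-end the truck",           "p_injury": 0.20, "severity": 3},
    {"name": "swerve into helmeted rider",   "p_injury": 0.80, "severity": 7},
    {"name": "swerve into helmetless rider", "p_injury": 0.80, "severity": 9},
]

# "Optimizing" the crash: pick the option with the lowest expected harm.
choice = min(options, key=expected_harm)
print(choice["name"])  # -> "rear-end the truck" under these numbers
```

Note what the sketch buys us: once harm is reduced to a number, the “decision” is a one-line minimization. Every contested ethical question hides inside how those numbers get assigned.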
In some ways, the idea of crash optimization is old wine in new bottles. As long as there have been cars, there have been crashes. But self-driving cars move to the proverbial ethicist’s armchair what used to be decisions made exclusively from the driver’s seat. Those of us considering crash-optimization options have the advantage of reflecting on ethical quandaries with cool, deliberative remove. In contrast, the view from the driver’s seat is much different: It is one of reaction, not reflection.
Does this mean that you need to cancel your subscription to Car and Driver and dust off your copy of Kant’s Critique of Pure Reason? Probably not. But it does require that individuals involved in the design, production, purchase, and use of self-driving automobiles take the view from both the armchair and the driver’s seat. And as potential consumers and users of this emerging technology, we need to consider how we want these cars to be programmed, what the ethical implications of this programming may be, and how we will be assured access to this information.
Returning to the motorcycle scenario, developed by Noah Goodall of the Virginia Transportation Research Council, we can see the ethics of crash optimization at work. Recall that we limited ourselves to three available options: The car can be programmed to “decide” between rear-ending the truck, injuring you, the owner/driver; striking a helmeted motorcyclist; or hitting one who is helmetless. At first it may seem that autonomous cars should privilege owners and occupants of the vehicles. But what about the fact that research indicates 80 percent of motorcycle crashes injure or kill a motorcyclist, while only 20 percent of passenger car crashes injure or kill an occupant? Although crashing into the truck will injure you, you have a much higher probability of surviving the crash with lesser injuries than the motorcyclists do.
So perhaps self-driving cars should be programmed to choose crashes where the occupants will probabilistically suffer the least amount of harm. Maybe in this scenario you should just take one for the team and rear-end the truck. But it’s worth considering that many individuals, including me, would probably be reluctant to purchase self-driving cars that are programmed to sacrifice their owners in situations like the one we’re considering. If this is true, the result will be fewer self-driving cars on the road. And since self-driving cars will probably crash less often than human-driven ones, fewer of them on the road would mean more traffic fatalities overall than if the technology were widely adopted.
What about striking the motorcyclists? Remember that one rider is wearing a helmet, whereas the other is not. As a matter of probability, the rider with the helmet has a greater chance of survival if your car hits her. But here we can see that crash optimization isn’t only about probabilistic harm reduction. For example, it seems unfair to penalize motorcyclists who wear helmets by programming cars to strike them over non-helmet wearers, particularly in cases where helmet use is a matter of law. Furthermore, it is good public policy to encourage helmet use; helmets reduce fatalities by 22 to 42 percent, according to a National Highway Traffic Safety Administration report. As a motorcyclist myself, I might decide not to wear a helmet if I knew that crash-optimization algorithms were programmed to target riders who do. We certainly wouldn’t want to create such perverse incentives.
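To see the tension concretely, we can extend the earlier toy sketch with a fairness term. The penalty weight below is entirely invented; the point is only that adding it flips the ranking between the two riders, and nothing in the code can tell us whether that flip is right.

```python
# Extending the toy sketch: an invented fairness penalty so the
# algorithm does not target riders precisely because they wear
# helmets. All values remain illustrative assumptions.

FAIRNESS_PENALTY = 5.0  # hypothetical cost of punishing lawful helmet use

def adjusted_harm(option):
    score = option["p_injury"] * option["severity"]
    if option.get("wears_helmet"):
        score += FAIRNESS_PENALTY  # avoid the perverse incentive
    return score

riders = [
    {"name": "helmeted rider",   "p_injury": 0.80, "severity": 7,
     "wears_helmet": True},
    {"name": "helmetless rider", "p_injury": 0.80, "severity": 9},
]

# Without the penalty, the helmeted rider "loses" (5.6 vs. 7.2);
# with it, the ranking flips (10.6 vs. 7.2).
target = min(riders, key=adjusted_harm)
print(target["name"])  # -> "helmetless rider"
```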
Scenarios like these make clear that crash-optimization algorithms will need to be designed to assess numerous ethical factors when arriving at a decision for how to reduce harm in a given crash. This short scenario offers a representative sample of such considerations as safety, harm, fairness, law, and policy. It’s encouraging that automakers have been considering the ethics of self-driving cars for some time, and many are seeking the aid of philosophers who think about ethics for a living. Automakers have the luxury of the philosopher’s armchair when designing crash-optimization algorithms, and although the seat is not always comfortable, it’s one they must take.
As crash-optimization algorithms plumb some of our deepest ethical intuitions, different people will have different judgments about what the correct course of action should be; reasonable people can deeply disagree on the proper answers to ethical dilemmas like the ones posed. That’s why transparency will be crucial as this technology develops: Consumers have a right to know how their cars will be programmed. What’s less clear is how this will be achieved.
One avenue toward increasing transparency may be to offer consumers nontechnical, plain-language descriptions of the algorithms programmed into their autonomous vehicles. Perhaps in the future this information will appear in owner’s manuals: Instead of thumbing through a user’s guide trying to figure out how to connect your phone to the car’s Bluetooth system, you’ll be checking to see what the ethical algorithm is. But this assumes people will actually be motivated to read the owner’s manual.
Instead, maybe before using a self-driving car for the first time, drivers will be required to acknowledge its algorithmic programming and consent to it, perhaps by way of a user’s agreement. Of course, the risk is that such an approach to transparency and informed consent will take the shape of a lengthy and inscrutable iTunes-style agreement. And if you’re like most people, you’ll scroll to the end and click the “I agree” button without reading a word of it.
Finally, even if we can achieve meaningful transparency, it’s unclear how it will impact our notions of moral and legal responsibility. If you buy a car with the knowledge that it is programmed to privilege your life—the owner’s—over the lives of other motorists, how does this purchase impact your moral responsibility in an accident where the car follows this crash-optimization algorithm? What are the moral implications of purchasing a car and consenting to an algorithm that hits the helmetless motorcyclist? Or what do you do when you realize you are riding in a self-driving car that has algorithms with which you morally disagree?
These are complex issues that touch on our basic ideas of distribution of harm and injury, fairness, moral responsibility and obligation, and corporate transparency. It’s clear the relationship between ethics and self-driving cars will endure. The challenge as we move ahead is to ensure that consumers are made aware of this relationship in accessible and meaningful ways and are given appropriate avenues to be co-creators of the solutions—before self-driving cars are brought to market. Even though we probably won’t be doing the driving in the future, we shouldn’t be just along for the ride.
This article is part of the self-driving cars installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow.
Future Tense is a collaboration among Arizona State University, New America, and Slate.