Do Self-Driving Vehicles Present a Social Dilemma?

June 26, 2016 - Reading time: 3 minutes

When a self-driving vehicle must decide whom to harm – its passenger or a group of people standing in the road – it could make a decision that goes against the passenger, and that is not what the public wants.

Researchers at MIT have published a paper describing what they call a ‘social dilemma’, based on a series of surveys conducted last year which found that people want cars that minimize casualties overall, while also wanting a car that protects them at all costs. The root of the issue lies in the algorithms that will be fed into autonomous vehicles, in which decisions about safety are made according to pre-set rules. There are cases, however, when these rules conflict with one another, and that is where things get uncomfortable.

Consider a scenario in which an autonomous car must either hit a pedestrian or swerve in a way that harms its own passenger. What should the car do? Now consider a harder case: hit two pedestrians, or swerve and injure a single passenger? If the car follows a rule of minimizing casualties, two pedestrians outnumber one passenger, so the passenger must suffer injury to spare the two pedestrians. Is this acceptable? Would customers even want a car programmed with such rules?
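To make the conflict concrete, here is a minimal sketch of what a purely utilitarian decision rule might look like in code. The names (`Outcome`, `choose_action`) and the structure are illustrative assumptions for this article only, not the actual logic of the MIT study or of any real autonomous-driving system.

```python
# Toy sketch of a purely utilitarian decision rule for the scenario above.
# All names here are hypothetical; no real vehicle software is implied.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str          # e.g. "stay_course" or "swerve"
    people_harmed: int   # expected number of people harmed by this action

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action that harms the fewest people, regardless of who they are."""
    return min(outcomes, key=lambda o: o.people_harmed)

# Example: two pedestrians ahead versus one passenger harmed by swerving.
decision = choose_action([
    Outcome(action="stay_course", people_harmed=2),  # hits two pedestrians
    Outcome(action="swerve", people_harmed=1),       # injures the passenger
])
print(decision.action)  # -> "swerve": the passenger bears the harm
```

The sketch shows why the rule is contentious: a strictly casualty-minimizing car will always sacrifice its passenger whenever doing so saves more people, which is precisely the behaviour the surveyed buyers said they did not want in their own car.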

The result is what the researchers call a “social dilemma,” in which people could end up making conditions less safe for everyone by acting in their own self-interest.

“If everybody does that, then we would end up in a tragedy … whereby the cars will not minimize casualties,” said Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study. Or, as the researchers write in the new paper, “For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest.”

The researchers conducted six surveys between June 2015 and November 2015, using Amazon's Mechanical Turk online crowdsourcing platform to gather public opinion.

The results consistently showed that people take a utilitarian approach to the ethics of autonomous vehicles, one that emphasizes the sheer number of lives that could be saved. For instance, 76 percent of respondents believe it is more moral for an autonomous vehicle, should such a circumstance arise, to sacrifice one passenger rather than 10 pedestrians.

But the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. One question asked respondents to rate the morality of an autonomous vehicle programmed to crash and kill its own passenger to save 10 pedestrians; the rating dropped by a third when respondents considered the possibility of riding in such a car.

Similarly, people were strongly opposed to the idea of the government regulating driverless cars to ensure they would be programmed with utilitarian principles. In the survey, respondents said they were only one-third as likely to purchase a vehicle regulated this way, as opposed to an unregulated vehicle, which could presumably be programmed in any fashion.

“This is a challenge that should be on the mind of carmakers and regulators alike,” the scholars write. Moreover, if autonomous vehicles actually turned out to be safer than regular cars, unease over the dilemmas of regulation “may paradoxically increase casualties by postponing the adoption of a safer technology.”
