A moral map for AI cars
Maxmen (Amy)
Source: Nature volume 562, pages 469-470 (25 October 2018)


Full Text (Footnote 1)

  1. A moral map for AI cars: survey reveals global variations in ethical rules of the road for autonomous vehicles.
  2. When a driver slams on the brakes to avoid hitting a pedestrian crossing the road illegally, she is making a moral decision that shifts risk from the pedestrian to the people in the car. Self-driving cars might soon have to make such ethical judgements on their own — but settling on a universal moral code for the vehicles could be a thorny task, suggests a survey of 2.3 million people around the world.
  3. The largest-ever survey of machine ethics, published this week in Nature, finds that many of the moral principles that guide a driver’s decisions vary by country. For example, in a scenario in which some combination of pedestrians and passengers will die in a collision, people from relatively prosperous countries with strong institutions, such as law enforcement, were less likely to spare a pedestrian who stepped into traffic illegally.
  4. “People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” says study co-author Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge.
  5. The survey, called the Moral Machine, laid out 13 scenarios in which someone’s death was inevitable. Respondents were asked to choose whom to spare in situations that involved a mix of variables: young or old, rich or poor, more people or fewer.
  6. People rarely encounter such stark moral dilemmas, and some critics ask whether the scenarios posed in the quiz are relevant to the ethical questions surrounding driverless cars. But the study’s authors say that the scenarios stand in for the subtle moral decisions that drivers make every day. The findings reveal cultural nuances that governments and makers of self-driving cars must take into account if they want the vehicles to gain public acceptance, they say.
  7. “It’s a remarkable paper,” says Nicholas Christakis, a social scientist at Yale University in New Haven, Connecticut. The debate about whether ethics are universal or vary between cultures is an old one, he says, and now the “twenty-first-century problem” of how to program self-driving cars has reinvigorated it.
  8. Some of the world’s biggest tech companies — including Google, Uber and Tesla — and carmakers now have self-driving-car programmes. Many of these companies argue that the vehicles could improve road safety and ease traffic, but social scientists say the cars raise complex ethical issues.
  9. In 2016, Rahwan’s team stumbled on a paradox about self-driving cars: in surveys, people say they want an autonomous vehicle to protect pedestrians, even if it means sacrificing its passengers — but also that they wouldn’t buy self-driving vehicles programmed to act in this way.
  10. Curious to see whether the prospect of self-driving cars might raise other ethical conundrums, Rahwan gathered psychologists, anthropologists and economists to create the online Moral Machine quiz. Within 18 months, it had recorded 40 million decisions made by people from 233 countries and territories.
  11. No matter their age, gender or country of residence, most people spared humans over pets, and groups of people over individuals. These responses are in line with rules proposed in what might be the only governmental guidance on self-driving cars: a 2017 report by the German Ethics Commission on Automated and Connected Driving.
  12. But agreement ends there. When the authors analysed answers from people in the 130 countries with at least 100 respondents, they found that the nations could be divided into three groups (see ‘Moral compass’). One contains North America and several European and other nations where Christianity has historically been the dominant religion; another includes countries such as Japan, Indonesia and Pakistan, which have strong Confucian or Islamic traditions. A third group consists of Central and South America, as well as France and former French colonies. The first group showed a stronger preference for sacrificing older lives to save younger ones than did the second group, for example.
  13. Test versions of autonomous cars are cruising through several US cities. By 2021, at least five manufacturers hope to have self-driving cars and trucks in wide use.
  14. Bryant Walker Smith, a law professor at the University of South Carolina in Columbia, says that the study is unrealistic because there are few instances in real life in which a vehicle would face a choice between striking two different types of person. “I might as well worry about how automated cars will deal with asteroid strikes,” he says.
  15. But Barbara Wege, who heads a group focused on autonomous-vehicle ethics at the car manufacturer Audi in Ingolstadt, Germany, says that such studies are valuable. Wege argues that self-driving cars would cause fewer accidents, proportionally, than human drivers do each year — but that events involving robots might receive more attention.
  16. Surveys such as the Moral Machine can help to prompt public discussions about inevitable accidents, and so might foster trust. “We need to come up with a social consensus,” she says, “about which risks we are willing to take.”

In-Page Footnotes

Footnote 1: One helpful diagram – ‘The Moral Compass’ – not reproduced.

