The Moral Machine experiment
Awad (Edmond), et al.
Source: Nature volume 563, pages 59–64 (1st November 2018)
Paper - Abstract


Authors' Abstract

  1. With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour.
  2. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories.
  3. Here we describe the results of this experiment.
    1. First, we summarize global moral preferences.
    2. Second, we document individual variations in preferences, based on respondents’ demographics.
    3. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries.
    4. Fourth, we show that these differences correlate with modern institutions and deep cultural traits.
  4. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics.
  5. All data used in this article are publicly available.


For the full text, follow this link (Local website only): PDF File.

Text Colour Conventions (see disclaimer)

  1. Blue: Text by me; © Theo Todman, 2021
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)

© Theo Todman, June 2007 - June 2021.