- With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour.
- To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions, in ten languages, from millions of people in 233 countries and territories.
- Here we describe the results of this experiment.
- First, we summarize global moral preferences.
- Second, we document individual variations in preferences, based on respondents’ demographics.
- Third, we report cross-cultural ethical variation, and uncover three major clusters of countries.
- Fourth, we show that these differences correlate with modern institutions and deep cultural traits.
- We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics.
- All data used in this article are publicly available.
For the full text, follow this link (Local website only): PDF File.
Text Colour Conventions
- Blue: Text by me; © Theo Todman, 2021
- Mauve: Text by correspondent(s) or other author(s); © the author(s)