Authors Citing this Paper: Madary (Michael)
- If you’ve ever dabbled in role-playing games – either online or in old-fashioned meatspace – you’ll know how easy it is to get attached to your avatar. It really hurts when your character gets mashed by a troll, felled by a dragon or slain by a warlock. The American sociologist (and enthusiastic gamer) William Sims Bainbridge has taken this relationship a step further, creating virtual representations for at least 17 deceased family members. In a 2013 essay ("Bainbridge (William Sims) - Transavatars") about online avatars, he foresees a time when we’ll be able to offload parts of our identity onto artificially intelligent simulations of ourselves that could function independently of us, and even persist after we die.
- What sorts of responsibilities would we owe to these simulated humans? However else we might feel about violent computer games, no one seriously thinks it’s homicide when you blast a virtual assailant to oblivion. Yet it’s no longer absurd to imagine that simulated people might one day exist, and be possessed of some measure of autonomy and consciousness. Many philosophers believe that minds like ours don’t have to be hosted by webs of neurons in our brains, but could exist in many different sorts of material systems. If they’re right, there’s no obvious reason why sufficiently powerful computers couldn’t hold consciousness in their circuits.
- Today, moral philosophers ponder the ethics of shaping human populations, with questions such as: what is the worth of a human life? What kind of lives should we strive to build? How much weight should we attach to the value of human diversity? But when it comes to thinking through the ethics of how to treat simulated entities, it’s not clear that we should rely on the intuitions we’ve developed in our flesh-and-blood world. We feel in our bones that there’s something wrong with killing a dog, and perhaps even a fly. But does it feel quite the same to shut down a simulation of a fly’s brain – or a human’s? When ‘life’ takes on new digital forms, our own experience might not serve as a reliable moral guide.
- Adrian Kent, a theoretical physicist at the University of Cambridge, has started to explore this lacuna in moral reasoning. Suppose we become capable of emulating a human consciousness on a computer very cheaply, he suggested in a recent paper. We’d want to give this virtual being a rich and rewarding environment to interact with – a life worth living. Perhaps we might even do this for real people by scanning their brain in intricate detail and reproducing it computationally. You could imagine such a technology being used to ‘save’ people from terminal illness; some transhumanists today see it as a route to immortal consciousness.
- This is an important paper, but it makes many implausible claims – or offers uncritical analyses – that I need to engage with closely.
- It seems to rely on Utilitarianism, but is also initially critical of it.
- Papers cited, outside those already noted in the Introduction, include the following:-
- This blog – Hanson - Two Types of Envy – is mentioned, but it’s introduced mainly as an ad hominem feminist complaint against Robin Hanson. Hanson’s article – and much of the supporting commentary – is rather noxious, but irrelevant to this paper, except that …
- There’s a feminist critique of the whole transhumanist project as being too androcentric, and separating the mind from the body. Indeed, I came across this paper in a disparaging reference in "Davies (Sally) - Women’s minds matter".
- To be continued …
- Sub-title: "Say you could make a thousand digital replicas of yourself – should you? What happens when you want to get rid of them?"
- For the full text, see Aeon: Ball - Sim ethics.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2020
- Mauve: Text by correspondent(s) or other author(s); © the author(s)