When Hal Kills, Who's to Blame? Computer Ethics
Dennett (Daniel)
Source: D. Stork, ed., Hal's Legacy: 2001's Computer as Dream and Reality, MIT Press 1997, pp 351-365
Paper - Abstract


Author’s Introduction

  1. The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated December 9, 1981, from the Philadelphia Inquirer – not the National Enquirer – with the headline "Robot killed repairman, Japan reports".
  2. The story was an anticlimax. At the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, which crushed him to death. The repairman had failed to follow instructions for shutting down the arm before he entered the workspace. Why, indeed, was this industrial accident in Japan reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that – in the public imagination at least – this was no ordinary machine. This was a robot, a machine that might have a mind, might have evil intentions, might be capable, not just of homicide, but of murder. Anglo-American jurisprudence speaks of mens rea – literally, the guilty mind:
      To have performed a legally prohibited action, such as killing another human being, one must have done so with a culpable state of mind, or mens rea. Such culpable mental states are of three kinds: they are either motivational states of purpose, cognitive states of belief, or the non-mental state of negligence. (Cambridge Dictionary of Philosophy, 1995, p. 482)
  3. The legal concept has no requirement that the agent be capable of feeling guilt or remorse or any other emotion; so-called cold-blooded murderers are not in the slightest degree exculpated by their flat affective state. Star Trek's Spock would fully satisfy the mens rea requirement in spite of his fabled lack of emotions. Drab, colorless – but oh so effective – "motivational states of purpose" and "cognitive states of belief" are enough to get the fictional Spock through the day quite handily. And they are well-established features of many existing computer programs.
  4. When IBM's computer Deep Blue beat world chess champion Garry Kasparov in the first game of their 1996 championship match, it did so by discovering and executing, with exquisite timing, a withering attack, the purposes of which were all too evident in retrospect to Kasparov and his handlers. It was Deep Blue's sensitivity to those purposes and a cognitive capacity to recognize and exploit a subtle flaw in Kasparov's game that explain Deep Blue's success. Murray Campbell, Feng-hsiung Hsu, and the other designers of Deep Blue, didn't beat Kasparov; Deep Blue did. Neither Campbell nor Hsu discovered the winning sequence of moves; Deep Blue did. At one point, while Kasparov was mounting a ferocious attack on Deep Blue's king, nobody but Deep Blue figured out that it had the time and security it needed to knock off a pesky pawn of Kasparov's that was out of the action but almost invisibly vulnerable. Campbell, like the human grandmasters watching the game, would never have dared consider such a calm mopping-up operation under pressure.
  5. Deep Blue, like many other computers equipped with artificial intelligence (AI) programs, is what I call an intentional system: its behavior is predictable and explainable if we attribute to it beliefs and desires – "cognitive states" and "motivational states" – and the rationality required to figure out what it ought to do in the light of those beliefs and desires. Are these skeletal versions of human beliefs and desires sufficient to meet the mens rea requirement of legal culpability? Not quite, but, if we restrict our gaze to the limited world of the chess board, it is hard to see what is missing. Since cheating is literally unthinkable to a computer like Deep Blue, and since there are really no other culpable actions available to an agent restricted to playing chess, nothing it could do would be a misdeed deserving of blame, let alone a crime of which we might convict it. But we also assign responsibility to agents in order to praise or honor the appropriate agent. Who or what, then, deserves the credit for beating Kasparov? Deep Blue is clearly the best candidate. Yes, we may join in congratulating Campbell, Hsu and the IBM team on the success of their handiwork; but in the same spirit we might congratulate Kasparov's teachers, handlers, and even his parents. And, no matter how assiduously they may have trained him, drumming into his head the importance of one strategic principle or another, they didn't beat Deep Blue in the series: Kasparov did.
  6. Deep Blue is the best candidate for the role of responsible opponent of Kasparov, but this is not good enough, surely, for full moral responsibility.


© Theo Todman, June 2007 - Jan 2021.