The Philosophy of Artificial Intelligence
Boden (Margaret), Ed.
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.

BOOK ABSTRACT:

Amazon Book Description

  1. Is "artificial intelligence" a contradiction in terms? Could computers (in principle) be made to model every aspect of the mind, including logic, language, and emotion?
  2. This interdisciplinary collection of classical and contemporary readings provides a clear and comprehensive guide to the many hotly-debated philosophical issues at the heart of artificial intelligence.
  3. The editor includes an informative introduction and reading list.

Back Cover Blurb
  1. This volume contains classical and contemporary essays which explore the philosophical foundations of artificial intelligence and cognitive science.
  2. They illustrate objections raised by critics outside the field, and radical controversies within it.

Note
BOOK COMMENT:
  • OUP Paperback, 1990
  • Oxford Readings in Philosophy



"Boden (Margaret) - Escaping From the Chinese Room"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings

COMMENT: Also in "Boden (Margaret) - Artificial Intelligence in Psychology: Interdisciplinary Essays"



"Boden (Margaret) - The Philosophy of Artificial Intelligence: Introduction"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Churchland (Paul) - Some Reductive Strategies in Cognitive Neurobiology"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings


Philosophers Index Abstract
    A powerful conception of representation and computation--drawn from recent work in the neurosciences--is here outlined. Its virtues are explained and explored in three important areas: sensory representation, sensorimotor coordination, and microphysical implementation. It constitutes a highly general conception of cognitive activity that has significant reductive potential.



"Clark (Andy) - Connectionism, Competence, and Explanation"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Cussins (Adrian) - The Connectionist Construction of Concepts"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Dennett (Daniel) - Cognitive Wheels: The Frame Problem of AI"

Source: Dennett - Brainchildren - Essays on Designing Minds

COMMENT: Also in "Boden (Margaret), Ed. - The Philosophy of Artificial Intelligence"



"Dreyfus (Hubert L.) & Dreyfus (Stuart E.) - Making a Mind Versus Modelling the Brain: Artificial Intelligence Back At a Branch-point"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Hayes (Patrick J.) - The Naïve Physics Manifesto"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Hinton (Geoffrey E.), McClelland (James) & Rumelhart (David) - Distributed Representations"

Source: Rumelhart, McClelland, Etc - Parallel Distributed Processing, Vol 1

COMMENT: Also in "Boden (Margaret), Ed. - The Philosophy of Artificial Intelligence"



"Marr (David) - Artificial Intelligence: A Personal View"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"McCulloch (Warren S.) & Pitts (Walter H.) - A Logical Calculus of the Ideas Immanent in Nervous Activity"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"McDermott (Drew) - A Critique of Pure Reason"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Newell (Allen) & Simon (Herbert) - Computer Science as Empirical Enquiry: Symbols and Search"

Source: Haugeland - Mind Design II

COMMENT: Also in "Boden (Margaret), Ed. - The Philosophy of Artificial Intelligence"



"Searle (John) - Minds, Brains, and Programs"

Source: Rosenthal - The Nature of Mind

Philosophers Index Abstract
  1. I distinguish between strong and weak artificial intelligence (AI).
  2. According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories.
  3. I argue that strong AI must be false, since a human agent could instantiate the program and still not have the appropriate mental states.
  4. I examine some arguments against this claim, and I explore some consequences of the fact that human and animal brains are the causal bases of existing mental phenomena.

BBS-Online
  • This article can be viewed as an attempt to explore the consequences of two propositions.
    1. Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality.
    2. Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim.
  • The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.
  • These two propositions have the following consequences:
    1. The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2.
    2. Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1.
    3. Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from consequences 1 and 2.

Another Abstract
  1. "Could a machine think?"
  2. On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains.
  3. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.





"Sloman (Aaron) - Motives, Mechanisms, and Emotions"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Turing (Alan) - Computing Machinery and Intelligence"

Source: Mind, Vol. 59, No. 236 (Oct., 1950), pp. 433-460


Philosophers Index Abstract
  1. In this article the author considers the question "can machines think?"
  2. The discussion centres on "imitation intelligence": the author proposes that the best strategy for a machine is to try to provide answers that would naturally be given by a man.
    → (Staff)

Sections
  1. The Imitation Game
  2. Critique of the New Problem
  3. The Machines concerned in the Game
  4. Digital Computers
  5. Universality of Digital Computers
  6. Contrary Views on the Main Question
    1. The Theological Objection
    2. The 'Heads in the Sand' Objection
    3. The Mathematical Objection
    4. The Argument from Consciousness
    5. Arguments from Various Disabilities
    6. Lady Lovelace's Objection
    7. Argument from Continuity in the Nervous System
    8. The Argument from Informality of Behaviour
    9. The Argument from Extra-Sensory Perception
  7. Learning Machines

Author’s Introduction – The Imitation Game
  1. I propose to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
  2. The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either 'X is A and Y is B' or 'X is B and Y is A'. The interrogator is allowed to put questions to A and B thus:
      C: Will X please tell me the length of his or her hair?
    Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification.
  3. His answer might therefore be 'My hair is shingled, and the longest strands are about nine inches long.'
  4. In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as 'I am the woman, don't listen to him!' to her answers, but it will avail nothing as the man can make similar remarks.
  5. We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'
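Turing's description above is procedural enough to sketch in code. The following Python sketch is purely illustrative (the `player_a`, `player_b`, and `interrogator` objects and their methods are hypothetical stand-ins, not anything from Turing's paper); it captures the structure of the game: randomly hidden labels, rounds of written question and answer, and a final identification by the interrogator.

```python
# A minimal, illustrative sketch of the imitation game's structure.
# The player and interrogator interfaces are assumptions for this sketch.
import random

def imitation_game(player_a, player_b, interrogator, rounds=5):
    """Run one game.

    player_a tries to cause a wrong identification; player_b tries to
    help the interrogator. The interrogator sees only written answers,
    under labels X and Y assigned at random (Turing's teleprinter
    arrangement hides tones of voice and appearance).

    Returns True iff the interrogator correctly identifies player A.
    """
    labels = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:                    # hide which label is which
        labels = {"X": player_b, "Y": player_a}

    transcript = []
    for _ in range(rounds):
        question = interrogator.ask(transcript)
        # Both hidden players answer the same written question.
        answers = {lbl: p.answer(question) for lbl, p in labels.items()}
        transcript.append((question, answers))

    guess = interrogator.identify(transcript)    # "X" or "Y", meaning "this one is A"
    return labels[guess] is player_a
```

Replacing `player_a` (the man) with a machine, and asking whether the interrogator's error rate changes, is exactly the substitution Turing proposes in place of the original question.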






In-Page Footnotes ("Turing (Alan) - Computing Machinery and Intelligence")

Footnote 1:
  • Sections 1, 2 and 6 are given in full, together with the first half of Section 3 and a late paragraph from Section 5 appended thereto.
  • Sections 4 and 7 are entirely omitted.
  • It is not made clear to the reader that this is the case.


Text Colour Conventions (see disclaimer)
  1. Blue: Text by me; © Theo Todman, 2019
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)



© Theo Todman, June 2007 - June 2019. Please address any comments on this page to theo@theotodman.com.