The Philosophy of Artificial Intelligence
Boden (Margaret), Ed.
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.

BOOK ABSTRACT:

Amazon Book Description

  1. Is "artificial intelligence" a contradiction in terms? Could computers (in principle) be made to model every aspect of the mind, including logic, language, and emotion?
  2. This interdisciplinary collection of classical and contemporary readings provides a clear and comprehensive guide to the many hotly-debated philosophical issues at the heart of artificial intelligence.
  3. The editor includes an informative introduction and reading list.

Back Cover Blurb
  1. This volume contains classical and contemporary essays which explore the philosophical foundations of artificial intelligence and cognitive science.
  2. They illustrate objections raised by critics outside the field, and radical controversies within it.

BOOK COMMENT:
  • OUP Paperback, 1990
  • Oxford Readings in Philosophy



"Boden (Margaret) - Escaping From the Chinese Room"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings

COMMENT: Also in "Boden (Margaret) - Artificial Intelligence in Psychology: Interdisciplinary Essays"



"Boden (Margaret) - The Philosophy of Artificial Intelligence: Introduction"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Churchland (Paul) - Some Reductive Strategies in Cognitive Neurobiology"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings


Philosophers Index Abstract
    A powerful conception of representation and computation--drawn from recent work in the neurosciences--is here outlined. Its virtues are explained and explored in three important areas: sensory representation, sensorimotor coordination, and microphysical implementation. It constitutes a highly general conception of cognitive activity that has significant reductive potential.



"Clark (Andy) - Connectionism, Competence, and Explanation"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Cussins (Adrian) - The Connectionist Construction of Concepts"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Dennett (Daniel) - Cognitive Wheels: The Frame Problem of AI"

Source: Dennett - Brainchildren - Essays on Designing Minds

COMMENT: Also in "Boden (Margaret), Ed. - The Philosophy of Artificial Intelligence"



"Dreyfus (Hubert L.) & Dreyfus (S.D.) - Making a Mind Versus Modelling the Brain: Artificial Intelligence Back At a Branch-point"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Hayes (Patrick J.) - The Naïve Physics Manifesto"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Hinton (Geoffrey E.), McClelland (James) & Rumelhart (David) - Distributed Representations"

Source: Rumelhart, McClelland, et al. - Parallel Distributed Processing, Vol. 1

COMMENT: Also in "Boden (Margaret), Ed. - The Philosophy of Artificial Intelligence"



"Marr (David) - Artificial Intelligence: A Personal View"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"McCulloch (Warren S.) & Pitts (Walter H.) - A Logical Calculus of the Ideas Immanent in Nervous Activity"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"McDermott (Drew) - A Critique of Pure Reason"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Newell (Allen) & Simon (Herbert) - Computer Science as Empirical Enquiry: Symbols and Search"

Source: Haugeland - Mind Design II

COMMENT: Also in "Boden (Margaret), Ed. - The Philosophy of Artificial Intelligence"



"Searle (John) - Minds, Brains, and Programs"

Source: Behavioral and Brain Sciences, Volume 3 - Issue 3 - September 1980, pp. 417-424
Write-up Note (full text reproduced below).

Philosophers Index Abstract
  1. I distinguish between strong and weak artificial intelligence (AI).
  2. According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories.
  3. I argue that strong AI must be false, since a human agent could instantiate the program and still not have the appropriate mental states.
  4. I examine some arguments against this claim, and I explore some consequences of the fact that human and animal brains are the causal basis of existing mental phenomena.

BBS-Online
  • This article can be viewed as an attempt to explore the consequences of two propositions.
    1. Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality.
    2. Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim.
  • The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.
  • These two propositions have the following consequences:
    1. The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of propositions 1 and 2.
    2. Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of proposition 1.
    3. Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs, but would have to duplicate the causal powers of the human brain. This follows from proposition 2 and consequence 2.

Another Abstract
  1. "Could a machine think?"
  2. On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains.
  3. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.


COMMENT:

Write-up [3] (as at 31/08/2017 19:35:02): Searle - Minds, Brains, and Programs

This note provides my detailed review of "Searle (John) - Minds, Brains, and Programs". As it was originally only available in PDF form, it was presumably written when I was an undergraduate, in 2002 or thereabouts.

Abstract (Aims of Searle’s Paper)
  • The aim of Searle’s paper is to show that instantiating a computer program is never in itself sufficient for intentionality.
  • The form of his argument is to show by a thought-experiment that a human agent could instantiate (“run”) a program, yet still not have the relevant intentionality (knowledge of Chinese).
  • Searle thinks that the causal [4] features of the brain are critical for intentionality (and for other aspects of mentality such as consciousness). That is, the hardware (or wetware) is critical and has to be of an appropriate sort. The software isn't enough, though Searle agrees that human beings do instantiate lots of programs.
  • Hence, attempts at AI need to concentrate on duplicating the causal powers of the brain, and not just on programming. While only a machine can think (programs can't), it has to be a special machine, physically and not just functionally similar to a brain.
  • Note: Intentionality is what thoughts are about or directed on – the semantic rather than the syntactic aspect of thought. Searle denies that digital computers have any intentionality or semantic aspect: they operate merely at the syntactic level. The programmer supplies the meaning when (s)he encodes the input or interprets the output. Computers just manipulate meaningless symbols which, to them, signify nothing (a toy sketch of such purely syntactic manipulation follows).
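
To make the syntax-versus-semantics point concrete, here is a deliberately toy Python sketch of the kind of purely formal symbol manipulation Searle has in mind. Everything in it (the rule-book entries, the names) is invented for illustration; it is not a model of any real system, just a lookup table pairing uninterpreted input strings with uninterpreted output strings.

    # A toy "rule book": uninterpreted symbol strings in, uninterpreted
    # symbol strings out. The entries are invented placeholders.
    RULE_BOOK = {
        "你好吗": "我很好",       # the program never knows this is a greeting
        "你会说中文吗": "会",     # nor that this asks about speaking Chinese
    }

    def chinese_room(squiggles: str) -> str:
        """Return whatever the rule book pairs with the input symbols.
        Pure pattern matching: syntax without semantics."""
        # The default reply is just another opaque symbol string.
        return RULE_BOOK.get(squiggles, "请再说一遍")

    print(chinese_room("你好吗"))  # fluent-looking output, zero understanding

On Searle's view, enlarging such a rule book until it passes the Turing test changes nothing essential: the manipulation remains purely formal, and the meanings live only in the programmer and the observers.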

Introduction
  • Searle distinguishes between Strong and Weak AI. His argument is with Strong AI.
  • Weak AI: fine by Searle; it simply claims that computers are powerful tools for running controlled psychological experiments to help explain the mind.
  • Strong AI: goes much further than this, claiming
    1. That an appropriately programmed computer really is a mind and does understand and
    2. That programs are the explanations for human cognition.

The Chinese Room Thought Experiment
  • I take it as read that we know the description of the Chinese Room (CR) thought-experiment.
  • Searle places himself in the room as a homunculus who really understands English but only simulates an understanding of Chinese, yet he passes the Turing test for, and appears to understand, both. At least the room does, for the observers don't know what's in it. This is important for Searle, because he thinks we're right to attribute intentionality on behaviourist grounds until we know that the relevant analogy doesn't hold (i.e. until we open the lid).
  • The existence of the homunculus is very important for Searle, because he thinks he knows in all situations what the homunculus understands. Even when, in response to objections, he has the homunculus internalise the contents of the room, it's still the homunculus that has to do the understanding. His intuition is that it wouldn't understand anything of Chinese. He says this is a fact: the homunculus is him, he knows for a fact that he knows no Chinese, and his operations in the CR don't teach him any.

Searle’s Immediate Response
  • Searle thinks it’s obvious that he doesn’t understand a word of Chinese. Since (he says) he’s the computer in this case, computers never understand anything.
  • Since computers don't understand anything, they provide no insight into human thought. The Searle-homunculus's understanding of English and his "understanding" of Chinese are not comparable: he only appears to understand Chinese, but really does understand English.
  • Hence, the two claims of Strong AI are without foundation.
  • In particular, formal properties are insufficient – the homunculus follows all the formal rules, yet understands nothing.
  • Searle rejects “fancy footwork” about “understanding” – this comes up later as well. Understanding has to be a true mental property, not a figure of speech such as an adding machine “knowing” how to add up.
  • The crux of the matter is whether Searle's thought-experiment is relevant and a true parallel to what Strong AI claims. So …

AI Responses to the CR Thought Experiment and Searle’s Replies
  • There are six of these, though Searle thinks the last two are hardly worth mentioning.
  1. The Systems Reply
    • AI Response: Intentionality is to be attributed to the whole room, not just to the homunculus.
    • Searle’s Reply: The homunculus can internalise the program and do its calculations in his head, but would still not understand Chinese, and there isn’t anything left over that could. This apart, he thinks only someone in the grip of an ideology could imagine that the conjunction of a person and bits of paper could understand Chinese if the person himself didn’t [5]. Searle denies that the CR is processing information (only symbols), so there is a parallel with stomachs and such-like, which process food but aren’t minds. Searle claims that any theory that attributes intentionality to thermostats has fallen victim to a reductio ad absurdum: the whole purpose of the project was to find the difference between things that have minds (humans) and those that don’t (thermostats), so if we start attributing minds to thermostats, we’ve gone wrong somewhere.

  2. The Robot Reply
    • AI Response: we need to embed the CR in a robot that responds to its environment. This would have intentionality.
    • Searle’s Reply: Firstly, Searle notes that this reply concedes that intentionality isn’t just formal symbol manipulation but involves causal interaction with the world. But, in any case, he just re-runs his thought-experiment: the homunculus still doesn’t need to know where his inputs are coming from nor what his outputs are doing, so we’re no better off.

  3. The Brain Simulator Reply
    • AI Response: forget the simplistic information-processing program and build one that simulates the brain at the level of synapses, including parallel processing. If this machine couldn't be said to understand Chinese, then neither could native Chinese speakers.
    • Searle’s Reply:
      1. Searle thinks this undermines the whole point of AI, which is that to understand the mind we don’t need to understand how the brain works, because – important slogan – the mind is to the brain as the program is to the hardware, and programs can be instantiated on any hardware we like provided it can run them.
      2. Even so, Searle can elaborate on his CR with his homunculus operating a hydraulic system that’s connected up like the brain. He still wouldn’t understand any Chinese.
      3. Our homunculus could even internalise all this in his imagination and be no better off – this counters the Systems Reply to this response. Again, formal properties aren’t enough for understanding.

  4. The Combination Reply
    • AI Response: this imagines a simulator at the synapse level crammed into a robot that looks or at least acts like a human being. We’d surely ascribe intentionality to such a system.
    • Searle’s Reply:
      1. Searle agrees we would, but denies that this helps the Strong AI cause. We’re attributing intentionality to the robot on the basis of the Turing test, which Searle denies is a sure sign of intentionality. If we knew that it was a robot – at least in the sense that there was a man inside fiddling with a hydraulic system – we’d no longer make this attribution but treat it as an ingenious mechanical dummy.
      2. Searle makes the important (but debatable) point that the reason we attribute intentionality to apes and dogs is not merely for behaviourist reasons but because they’re made of the same “causal stuff” as we are.

  5. The Other Minds Reply
    • AI Response: we only know other people have minds by their behaviour, so if computers pass the Turing test we have to attribute intentionality to them.
    • Searle’s Reply: Searle doesn’t give his “causal stuff” response, but claims that the issue is metaphysics, not epistemology. We’re supposing the reality of the mental, and he thinks he’s shown that computational processing plus inputs and outputs can carry on in the absence of mental states (and hence is no mark of the mental).

  6. The Many Mansions Reply
    • AI Response: we’re not there with the right hardware yet, but eventually we will be. Such machines will have intentionality.
    • Searle’s Reply: Fine, maybe so, but this has nothing to do with Strong AI.

Searle’s Conclusion
  • Searle agrees that our bodies and brains are machines, so he has no problem in principle with machines understanding Chinese. What he denies is that mere instantiations of computer programs have understanding. The organism and its biology are crucial.
  • He thinks it's an empirical question whether aliens might have intentionality even though they have brains made of different stuff [6].
  • No formal model is of itself sufficient for intentionality. Even if native Chinese speakers are in some sense running the CR program, instantiating that same program in the CR produces no understanding, so the program isn't enough.

Questions and Answers
  • The important negative answer is to the suggestion that a computer could be made to think solely on the basis of running the right program. Syntax with no semantics isn't enough.
  • He makes the important distinction between simulation and duplication. Computer simulations of storms don’t make us wet, so why should simulations of understanding understand?

Rationalisations for the deceptive attractiveness of Strong AI
  • Confusion about Information Processing: an AI response to the simulation-versus-duplication argument above is that the appropriately programmed computer stands in a special relation to the mind/brain because the information processed by the computer and by the mind/brain is the same, and AI claims that information processing is the essence of the mental. The simulation of a mind is a mind, even though the simulation of a storm isn't a storm. Searle claims that since computers operate at the syntactic rather than the semantic level, they don't process information in the way human beings do. He sees a dilemma: either we construe "information" at the semantic level, in which case computers don't process it, or at the syntactic level, in which case even thermostats do. He treats it as a reductio ad absurdum to attribute mental states to thermostats.
  • Residual Behaviourism: Searle rehearses his rejection of the Turing test and the attribution of intentional states to adding machines.
  • Residual Dualism: what matters to Strong AI and Functionalism is the program, which could be realised by a computer, a Cartesian mental substance, or a Hegelian world spirit [7]. The whole rationale behind Strong AI is that the mind is separable from the brain, both conceptually and empirically. Searle admits that Strong AI is not substance-dualist, but it is dualist in disconnecting mind from brain: the brain just happens to be one type of machine capable of instantiating the mind-program. Searle finds the AI literature's fulminations against dualism amusing on this account [8].

Could a Machine Think?
  • Only machines can think, and it’s the hardware rather than the software that’s important. Intentionality is a biological phenomenon.




In-Page Footnotes ("Searle (John) - Minds, Brains, and Programs")

Footnote 3:
  • This is the write-up as it was when this Abstract was last output, with text as at the timestamp indicated (31/08/2017 19:35:02).
Footnote 4: Searle mentions this a lot, but doesn’t really explain what he means (or, if he does, I missed it!).

Footnote 5:
  • We might ask whether a person who did operate in this way could do so without (thereby) knowing Chinese.
  • Segal claims that the program isn’t a “Chinese speaking” program but a “Chinese question answering” program.
Footnote 6: But how would Searle know they had intentionality?

Footnote 7:
  • Maybe so, but programs can’t run themselves, and the essence of the Cartesian thought-experiment for mind’s being a substance separate from matter is that we can supposedly imagine disembodied minds. We can’t imagine programs running without hardware.
Footnote 8:
  • This doesn’t seem to rationalise the appeal of Strong AI, but rather to introduce an invalid “guilt by association” argument against proponents of Strong AI.



"Sloman (Aaron) - Motives, Mechanisms, and Emotions"

Source: Boden - The Philosophy of Artificial Intelligence - Oxford Readings



"Turing (Alan) - Computing Machinery and Intelligence"

Source: Mind, Vol. 59, No. 236 (Oct., 1950), pp. 433-460


Philosophers Index Abstract
  1. In this article the author considers the question "can machines think?"
  2. The import of the discussion is on "imitation intelligence", as the author proposes that the best strategy for a machine is to try to provide answers that would naturally be given by a man.
    → (Staff)

Sections
  1. The Imitation Game
  2. Critique of the New Problem
  3. The Machines concerned in the Game
  4. Digital Computers
  5. Universality of Digital Computers
  6. Contrary Views on the Main Question
    1. The Theological Objection
    2. The 'Heads in the Sand' Objection
    3. The Mathematical Objection
    4. The Argument from Consciousness
    5. Arguments from Various Disabilities
    6. Lady Lovelace's Objection
    7. Argument from Continuity in the Nervous System
    8. The Argument from Informality of Behaviour
    9. The Argument from Extra-Sensory Perception
  7. Learning Machines

Author’s Introduction – The Imitation Game
  1. I propose to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
  2. The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either 'X is A and Y is B' or 'X is B and Y is A'. The interrogator is allowed to put questions to A and B thus:
      C: Will X please tell me the length of his or her hair?
    Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification.
  3. His answer might therefore be 'My hair is shingled, and the longest strands are about nine inches long.'
  4. In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as 'I am the woman, don't listen to him!' to her answers, but it will avail nothing as the man can make similar remarks.
  5. We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'
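
As a reading aid (no part of Turing's text), the question-and-answer protocol just described can be sketched in a few lines of Python. The function names and canned answers below are invented stand-ins; Turing specifies only the three roles and the text-only channel.

    # Toy sketch of the imitation game: interrogator C exchanges typewritten
    # questions and answers with hidden players X and Y, then must say which
    # is which. Player A tries to cause a misidentification; B helps C.
    import random

    def interrogate(player_x, player_y, questions):
        """Put each question to both hidden players over a text-only channel
        (Turing's teleprinter) and return the transcript C must judge from."""
        return [(q, player_x(q), player_y(q)) for q in questions]

    def player_a(question):   # in Turing's variant, A is replaced by the machine
        return "My hair is shingled, and the longest strands are about nine inches long."

    def player_b(question):   # B answers truthfully to help the interrogator
        return "I am the woman, don't listen to him!"

    # C does not know which label (X or Y) hides which player.
    x, y = random.sample([player_a, player_b], 2)

    for q, ans_x, ans_y in interrogate(x, y, ["Will X please tell me the length of his or her hair?"]):
        print("C:", q)
        print("X:", ans_x)
        print("Y:", ans_y)
    # C must now declare 'X is A and Y is B' or 'X is B and Y is A'.

Turing's question is then whether, with a machine taking the part of A, the interrogator's error rate differs from the man-versus-woman baseline.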






In-Page Footnotes ("Turing (Alan) - Computing Machinery and Intelligence")

Footnote 1:
  • Sections 1, 2 and 6 are given in full, together with the first half of Section 3 and a late paragraph from Section 5 appended thereto.
  • Sections 4 and 7 are entirely omitted.
  • It is not made clear to the reader that this is the case.

