Minds, Brains, and Programs
Searle (John)
Source: Rosenthal - The Nature of Mind
Paper - Abstract



Philosophers Index Abstract

  1. I distinguish between strong and weak artificial intelligence (AI).
  2. According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories.
  3. I argue that strong AI must be false, since a human agent could instantiate the program and still not have the appropriate mental states.
  4. I examine some arguments against this claim, and I explore some consequences of the fact that human and animal brains are the causal bases of existing mental phenomena.

Another Abstract (BBS-Online)
  1. "Could a machine think?"
  2. On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains.
  3. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

Write-up2 (as at 31/08/2017 19:35:02): Searle - Minds, Brains, and Programs

This note provides my detailed review of "Searle (John) - Minds, Brains, and Programs". As it was originally available only in PDF form, it was presumably written when I was an undergraduate, in 2002 or thereabouts.

Abstract (Aims of Searle’s Paper)
  • The aim of Searle’s paper is to show that instantiating a computer program is never in itself sufficient for intentionality.
  • The form of his argument is to show by a thought-experiment that a human agent could instantiate (“run”) a program, yet still not have the relevant intentionality (knowledge of Chinese).
  • Searle thinks that the causal3 features of the brain are critical for intentionality (and other aspects of mentality such as consciousness). That is, the hardware (or wetware) is critical and has to be of an appropriate sort. The software isn’t enough, though Searle agrees that human beings do instantiate lots of programs.
  • Hence, attempts at AI need to concentrate on duplicating the causal powers of the brain, and not just on programming. While only a machine can think (programs can’t), it has to be a special machine, physically and not merely functionally similar to a brain.
  • Note: Intentionality is what thoughts are about or directed on – the semantic rather than syntactic aspect of thought. Searle denies that digital computers have any intentionality or semantic aspect, operating merely at the syntactic level. The programmer supplies the meaning when (s)he encodes the input or interprets the output. Computers just manipulate meaningless symbols which, for them, signify nothing.

Introduction
  • Searle distinguishes between Strong and Weak AI. His argument is with Strong AI.
  • Weak AI: this is fine by Searle; it claims merely that computers are powerful tools for running controlled psychological experiments to help explain the mind.
  • Strong AI: goes much further than this, claiming
    1. That an appropriately programmed computer really is a mind and does understand and
    2. That programs are the explanations for human cognition.

The Chinese Room Thought Experiment
  • I take it as read that the description of the Chinese Room (CR) thought-experiment is familiar.
  • Searle places himself in the room as a homunculus who really understands English, but only simulates an understanding of Chinese, yet he passes the Turing test for, and appears to understand, both. At least the room does, since the observers don’t know what’s inside it. This is important for Searle, because he thinks we’re right to attribute intentionality on behaviourist grounds until we know that the relevant analogy doesn’t hold (i.e. until we open the lid).
  • The existence of the homunculus is very important for Searle, because he thinks he knows, in all situations, what the homunculus understands. Even when, in response to objections, he has the homunculus internalise the contents of the room, it’s still the homunculus that has to do the understanding. His intuition is that it would understand nothing of Chinese. Indeed, he claims this as fact: the homunculus is Searle himself, he knows he understands no Chinese, and his operations in the CR teach him none. (A toy sketch of this purely syntactic rule-following follows below.)
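To make vivid what purely syntactic rule-following amounts to, here is a minimal sketch (my own illustration, not Searle's; the rule book and symbols are invented) of a "room" realised as a lookup table in Python. The operator matches input shapes against the table and copies out the paired shapes; at no point does anything consult what the symbols mean.

    # A toy "Chinese Room": a rule book pairing input symbol strings with
    # output symbol strings, followed with no regard to meaning.
    # All rules and symbols here are invented purely for illustration.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # gloss (known to us, not the operator): "How are you?" -> "Fine, thanks."
        "你会说中文吗？": "当然会。",    # gloss: "Do you speak Chinese?" -> "Of course."
    }

    def room(input_symbols: str) -> str:
        """Match the input shapes against the rule book and copy out the
        paired shapes; nothing here consults what any symbol means."""
        return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

    if __name__ == "__main__":
        # To an outside observer the room "answers in Chinese"; yet neither
        # this code nor anyone executing it by hand thereby understands
        # Chinese - Searle's point that syntax does not yield semantics.
        print(room("你好吗？"))

A rule book adequate to pass a real Turing test would of course be astronomically larger, but the extra size would add nothing that isn't already purely formal, which is just Searle's point.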

Searle’s Immediate Response
  • Searle thinks it’s obvious that he doesn’t understand a word of Chinese. Since (he says) he is the computer in this case, and he understands nothing, merely running the program never gives a computer understanding.
  • Since computers don’t understand anything, they provide no insight into human thought. The Searle-homunculus’s grasp of English and of Chinese are not comparable: he genuinely understands English, but only appears to understand Chinese.
  • Hence, the two claims of Strong AI are without foundation.
  • In particular, formal properties are insufficient – the homunculus follows all the formal rules, yet understands nothing.
  • Searle rejects “fancy footwork” about “understanding” – this comes up later as well. Understanding has to be a true mental property, not a figure of speech such as an adding machine “knowing” how to add up.
  • The crux of the matter is whether Searle’s thought-experiment is relevant and a true parallel to what Strong AI claims. So …

AI Responses to the CR Thought Experiment and Searle’s Replies
  • There are 6 of these, though Searle thinks the last two are hardly worth mentioning.
    1. The Systems Reply
      • AI Response: Intentionality is to be attributed to the whole room, not just to the homunculus.
      • Searle’s Reply: The homunculus can internalise the program and do its calculations in his head, but would still not understand Chinese, and there isn’t anything left over that could. That apart, he thinks only someone in the grip of an ideology could imagine that the conjunction of a person and bits of paper could understand Chinese if the person himself didn’t4. Searle denies that the CR is processing information (it merely manipulates symbols), so there is a parallel with stomachs and the like, which process food but aren’t minds. Searle claims that any theory that attributes intentionality to thermostats has fallen victim to a reductio ad absurdum: the whole purpose of the project was to find the difference between things that have minds (humans) and things that don’t (thermostats), so if we start attributing minds to thermostats, we’ve gone wrong somewhere.
    2. The Robot Reply
      • AI Response: we need to embed the CR in a robot that responds to its environment. This would have intentionality.
      • Searle’s Reply: Firstly, Searle notes that this reply concedes that intentionality isn’t just formal symbol manipulation, but involves causal interaction with the world. In any case, he just re-runs his thought-experiment: the homunculus still doesn’t need to know where his inputs are coming from or what his outputs are doing, so we’re no better off.
    3. The Brain Simulator Reply
      • AI Response: forget the simplistic information-processing program and build one that simulates the brain at the level of synapses, including parallel processing. If this machine couldn’t be said to understand Chinese, nor could native Chinese speakers.
      • Searle’s Reply:
        1. Searle thinks this undermines the whole point of AI, which is that to understand the mind we don’t need to understand how the brain works, because – important slogan – the mind is to the brain as the program is to the hardware, and programs can be instantiated on any hardware we like provided it can run them.
        2. Even so, Searle can elaborate on his CR with his homunculus operating a hydraulic system that’s connected up like the brain. He still wouldn’t understand any Chinese.
        3. Our homunculus could even internalise all this in his imagination and be no better off – this counters the Systems Reply to this response. Again, formal properties aren’t enough for understanding.
    4. The Combination Reply
      • AI Response: this imagines a simulator at the synapse level crammed into a robot that looks or at least acts like a human being. We’d surely ascribe intentionality to such a system.
      • Searle’s Reply:
        1. Searle agrees we would, but denies that this helps the Strong AI cause. We’re attributing intentionality to the robot on the basis of the Turing test, which Searle denies is a sure sign of intentionality. If we knew that it was a robot – at least in the sense that there was a man inside fiddling with a hydraulic system – we’d no longer make this attribution but treat it as an ingenious mechanical dummy.
        2. Searle makes the important (but debatable) point that the reason we attribute intentionality to apes and dogs is not merely for behaviourist reasons but because they’re made of the same “causal stuff” as we are.
    5. The Other Minds Reply
      • AI Response: we only know other people have minds by their behaviour, so if computers pass the Turing test we have to attribute intentionality to them.
      • Searle’s Reply: Searle doesn’t give his “causal stuff” response, but claims that the issue is metaphysical, not epistemological. We’re supposing the reality of the mental, and he thinks he’s shown that computational processing plus inputs and outputs can carry on in the absence of mental states (and hence is no mark of the mental).
    6. The Many Mansions Reply
      • AI Response: we’re not there with the right hardware yet, but eventually we will be. Such machines will have intentionality.
      • Searle’s Reply: fine, maybe so, but this has nothing to do with Strong AI.

Searle’s Conclusion
  • Searle agrees that our bodies and brains are machines, so he has no problem in principle with machines understanding Chinese. What he denies is that mere instantiations of computer programs have understanding. The organism and its biology are crucial.
  • He thinks it’s an empirical question whether aliens might have intentionality even though they have brains made of different stuff5.
  • No formal model is of itself sufficient for intentionality. Even if native Chinese speakers do run the CR program, instantiating that same program in the CR yields no understanding; so the program by itself cannot be what supplies the understanding.

Questions and Answers
  • The important negative answer is to the suggestion that a computer could be made to think solely by running the right program. Syntax without semantics isn’t enough.
  • He makes the important distinction between simulation and duplication. Computer simulations of storms don’t make us wet, so why should simulations of understanding understand?

Rationalisations for the deceptive attractiveness of Strong AI
  • Confusion about Information Processing: an AI response to the simulation-versus-duplication argument above is that the appropriately programmed computer stands in a special relation to the mind/brain because the information it processes and the information the mind/brain processes are the same, and AI claims that information processing is the essence of the mental. On this view, the simulation of a mind is a mind, even though the simulation of a storm isn’t a storm. Searle claims that since computers operate at the syntactic rather than the semantic level, they don’t process information in the way human beings do. He sees a dilemma: either we treat information as fundamentally semantic, so that computers don’t process it, or we treat it as syntactic, so that thermostats do. He treats the attribution of mental states to thermostats as a reductio ad absurdum.
  • Residual Behaviourism: Searle rehearses his rejection of the Turing test and the attribution of intentional states to adding machines.
  • Residual Dualism: what matters to Strong AI and Functionalism is the program, which could be realised by a computer, a Cartesian mental substance or a Hegelian world spirit6. The whole rationale behind strong AI is that the mind is separable from the brain, both conceptually and empirically. Searle admits that Strong AI is not substance-dualist, but is dualist in disconnecting mind from brain. The brain just happens to be one type of machine capable of instantiating the mind-program. Searle finds the AI literature’s fulminations against dualism amusing on this account7.

Could a Machine Think?
  • Only machines can think, and it’s the hardware rather than the software that’s important. Intentionality is a biological phenomenon.



In-Page Footnotes

Footnote 2:
  • This is the write-up as it was when this Abstract was last output, with text as at the timestamp indicated (31/08/2017 19:35:02).
Footnote 3: Searle mentions this a lot, but doesn’t really explain what he means (or, if he does, I missed it!).

Footnote 4:
  • We might ask whether a person who did operate in this way could do so without (thereby) knowing Chinese.
  • Segal claims that the program isn’t a “Chinese speaking” program but a “Chinese question answering” program.
Footnote 5: But how would Searle know they had intentionality?

Footnote 6:
  • Maybe so, but programs can’t run themselves, and the essence of the Cartesian thought-experiment for the mind’s being a substance separate from matter is that we can supposedly imagine disembodied minds. We can’t imagine programs running without hardware.
Footnote 7:
  • This doesn’t seem to rationalise the appeal of Strong AI, but rather to introduce an invalid “guilt by association” argument against proponents of Strong AI.
