- Searle states that the main argument of his paper is directed at establishing his second proposition, that "instantiating a computer program is never by itself a sufficient condition of intentionality" (that is, of a mental state that includes beliefs, desires, and intentions). He accomplishes this with a Gedankenexperiment to show that even "a human agent could instantiate the program and still not have the relevant intentionality"; that is, Searle shows, in a masterful and convincing manner, that the behavior of the appropriately programmed computer could transpire in the absence of a cognitive mental state. I believe it is also possible to establish the proposition by means of an argument based on simple formal logic.
- We start with the knowledge that we are dealing with two different systems: system A is the computer, with its appropriate program; system B is the human being, particularly his brain. Even if system A could be arranged to behave and even to look like system B, in a manner that might make them indistinguishable to an external observer, system A must be at least internally different from B. If A and B were identical, they would both be human beings and there would be no thesis to discuss.
- Let us accept the proposal that, on an input-output basis, system A and system B could be made to behave alike; these shared behavioral properties we may group together under category X. The possession of the relevant mental states (including understanding, beliefs, desires, intentions, and the like) may be called property Y. We know that system B has property Y. Remembering that systems A and B are known to be different, it is an error in logic to argue that because systems A and B both have property X, they must also both have property Y.
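The fallacy just described can be made explicit in first-order notation. The following is a sketch of my own; the predicate symbols X and Y simply stand for the behavioral and mental-state properties defined above, and are not part of the original text:

```latex
% Premises, as given in the text:
%   A \neq B        -- the two systems are known to be internally different
%   X(A),\; X(B)    -- both exhibit the same input-output behavior
%   Y(B)            -- the human system B has the relevant mental states
%
% The rejected inference:
\[
  X(A) \land X(B) \land Y(B) \;\not\Rightarrow\; Y(A)
\]
% A countermodel shows its invalidity: take the domain \{A, B\} with
% the extension of X equal to \{A, B\} and that of Y equal to \{B\}.
% Every premise is then true while Y(A) is false.
```

Note that the premises do not rule out Y(A) either; the point is only that Y(A) does not follow from them, which is the weaker claim the argument needs.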
- The foregoing leads to a more general proposition - that no behavior of a computer, regardless of how successful it may be in simulating human behavior, is ever by itself sufficient evidence of any mental state. Indeed, Searle also appears to argue for this more general case when, later in the discussion, he notes: (a) to get computers to feel pain or fall in love would be neither harder nor easier than to get them to have cognition; (b) "For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter"; and (c) "to confuse simulation with duplication is the same mistake, whether it is pain, love, cognition." On the other hand, Searle seems not to maintain this general proposition with consistency. In his discussion of "IV. The combination reply" (to his thought experiment), Searle states: "If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would . . . find it rational and indeed irresistible to . . . attribute intentionality to it, pending some reason not to." On the basis of my argument, one would not need to know that the robot had a formal program (or whatever) accounting for its behavior in order to withhold the attribution of intentionality. All we need to know, in order to reject the thesis that the robot must possess the mental states of intentionality and so on, is that the robot's internal control apparatus is not made in the same way and out of the same stuff as the human brain.
- Now, it is true that neither my argument nor Searle's excludes the possibility that an appropriately programmed computer could also have mental states (property Y); the argument merely shows that we are not warranted in asserting that the robot must have mental states (Y). However, Searle goes on to contribute a valuable analysis of why so many people have believed that computer programs do impart a kind of mental process or state to the computer. Searle notes that, among other factors, a residual behaviorism or operationalism underlies the willingness to accept input-output patterns as sufficient grounds for postulating human mental states in appropriately programmed computers. I would add that there are still many psychologists, and perhaps philosophers, who are similarly burdened with residual behaviorism or operationalism even when dealing with criteria for the existence of conscious subjective experience in human subjects (see Libet 1973; 1979).
Text Colour Conventions
- Blue: Text by me; © Theo Todman, 2020
- Mauve: Text by correspondent(s) or other author(s); © the author(s)