- Searle clearly states that the basis of his critical evaluation of AI is dependent on two propositions. The first is: "Intentionality in human beings (and animals) is a product of causal features of the brain." He supports this proposition by an unargued statement that it "is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality" (my italics).
- This is a dogma of the psychoneural identity theory, which is one variety of the materialist theories of the mind. There is no mention of the alternative hypothesis of dualist interactionism that Popper and I published some time ago (1977) and that I have further developed more recently (Eccles 1978; 1979). According to that hypothesis intentionality is a property of the self-conscious mind (World 2 of Popper), the brain being used as an instrument in the realization of intentions. I refer to Fig. E 7-2 of Popper and Eccles (1977), where intentions appear in the box (inner senses) of World 2, with arrows indicating the flow of information by which intentions in the mind cause changes in the liaison brain and so eventually in voluntary movements.
- I have no difficulty with proposition 2, but I would suggest that 3, 4, and 5 be rewritten with "mind" substituted for "brain." Again the statement: "only a machine could think, and only very special kinds of machines . . . with internal causal powers equivalent to those of brains" is the identity theory dogma. I say dogma because it is unargued and without empirical support. The identity theory is very weak empirically, being merely a theory of promise.
- So long as Searle speaks about human performance without regarding intentionality as a property of the brain, I can appreciate that he has produced telling arguments against the strong AI theory. The story of the hamburger with the Gedankenexperiment of the Chinese symbols is related to Premack's attempts to teach the chimpanzee Sarah a primitive level of human language as expressed in symbols [See Premack: "Does the Chimpanzee Have a Theory of Mind?" BBS 1(4) 1978]. The criticism of Lenneberg (1975) was that, by conditioning, Sarah had learnt a symbol game, using symbols instrumentally, but had no idea that it was related to human language. He trained high school students with the procedures described by Premack, closely replicating Premack's study. The human subjects were quickly able to obtain considerably lower error scores than those reported for the chimpanzee. However, they were unable to translate correctly a single one of their completed sentences into English. In fact, they did not understand that there was any correspondence between the plastic symbols and language; instead they were under the impression that their task was to solve puzzles.
- I think this simple experiment indicates a fatal flaw in all the AI work. No matter how complex the performance instantiated by the computer, it can be no more than a triumph for the computer designer in simulation. The Turing machine is a magician's dream - or nightmare! It was surprising that after the detailed brain-mind statements of the abstract, I did not find the word "brain" in Searle's text through the whole of his opening three pages of argument, where he uses mind, mental states, human understanding, and cognitive states exactly as would be done in a text on dualist interactionism. Not until "the robot reply" does brain appear as "computer 'brain.' " However, from "the brain simulator reply" in the statements and criticisms of the various other replies, brain, neuron firings, synapses, and the like are profusely used in a rather naive way. For example "imagine the computer programmed with all the synapses of a human brain" is more than I can do by many orders of magnitude! So "the combination reply" reads like fantasy - and to no purpose!
- I agree that it is a mistake to confuse simulation with duplication. But I do not object to the idea that the distinction between the program and its realization in the hardware seems to be parallel to the distinction between the mental operations and the level of brain operations. However, Searle believes that the equation "mind is to brain as program is to hardware" breaks down at several points. I would prefer to substitute programmer for program, because as a dualist interactionist I accept the analogy that as conscious beings we function as programmers of our brains. In particular I regret Searle's third argument: "Mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer," and so later we are told "whatever else intentionality is, it is a biological phenomenon, and it is as likely to be causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomenon." I have the feeling of being transported back to the nineteenth century, where, as derisorily recorded by Sherrington (1950): "the oracular Professor Tyndall, presiding over the British Association at Belfast, told his audience that as the bile is a secretion of the liver, so the mind is a secretion of the brain."
- In summary, my criticisms arise from fundamental differences in respect of beliefs in relation to the brain-mind problem. So long as Searle is referring to human intentions and performances without reference to the brain-mind problem, I can appreciate the criticisms that he marshals against the AI beliefs that an appropriately programmed computer is a mind literally understanding and having other cognitive states. Most of Searle's criticisms are acceptable for dualist interactionism. It is high time that strong AI was discredited.