- Searle would have us believe that the present-day inhabitants of respectable universities have succumbed to the Faustian dream. (Mephistopheles: "What is it then?" Wagner: "A man is in the making.") He assures us, with a straight face, that some contemporary scholars think that "an appropriately programmed computer really is a mind," that such artificial creatures "literally have cognitive states." The real thing indeed! But surely no one could believe this? I mean, if someone did, then wouldn't he want to give his worn-out IBM a decent burial and say Kaddish for it? And even if some people at Yale, Berkeley, Stanford, and so forth do instantiate these weird belief states, what conceivable scientific interest could that hold? Imagine that they were right, and that their computers really do perceive, understand, and think. All that our Golem makers have done on Searle's story is to create yet another mind. If the sole aim is to "reproduce" (Searle's term, not mine) mental phenomena, there is surely no need to buy a computer.
- Frankly, I just don't care what some members of the AI community think about the ontological status of their creations. What I do care about is whether anyone can produce principled, revealing accounts of, say, the perception of tonal music (Longuet-Higgins 1979), the properties of stereo vision (Marr & Poggio 1979), and the parsing of natural language sentences (Thorne 1968). Everyone that I know who tinkers around with computers does so because he has an attractive theory of some psychological capacity and wishes to explore certain consequences of the theory algorithmically. Searle refers to such activity as "weak AI," but I would have thought that theory construction and testing was one of the stronger enterprises that a scientist could indulge in. Clearly, there must be some radical misunderstanding here.
- The problem appears to lie in Searle's (or his AI informants') strange use of the term 'theory.' Thus Searle writes in his shorter abstract: "According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories." Ignoring momentarily the "and therefore," which introduces a simple non sequitur, how could a program be a theory? As Moor (1978) points out, a theory is (at least) a collection of related propositions which may be true or false, whereas a program is (or was) a pile of punched cards. For all I know, maybe suitably switched-on computers do "literally" have cognitive states, but even if they did, how could that possibly licence the inference that the program per se was a psychological theory? What would one make of an analogous claim applied to physics rather than psychology? "Appropriately programmed computers literally have physical states, and therefore the programs are theories of matter" doesn't sound like a valid inference to me. Moor's exposition of the distinction between program and theory is particularly clear and thus worth quoting at some length:
A program must be interpreted in order to generate a theory. In the process of interpreting, it is likely that some of the program will be discarded as irrelevant since it will be devoted to the technicalities of making the program acceptable to the computer. Moreover, the remaining parts of the program must be organized in some coherent fashion with perhaps large blocks of the computer program taken to represent specific processes. Abstracting a theory from the program is not a simple matter, for different groupings of the program can generate different theories. Therefore, to the extent that a program, understood as a model, embodies one theory, it may well embody many theories. (Moor 1978, p. 215)
- Searle reports that some of his informants believe that running programs are other minds, albeit artificial ones; if that were so, would these scholars not attempt to construct theories of artificial minds, just as we do for natural ones? Considerable muddle then arises when Searle's informants ignore their own claim and use the terms 'reproduce' and 'explain' synonymously: "The project is to reproduce and explain the mental by designing programs." One can see how hopelessly confused this is by transposing the argument back from computers to people. Thus I have noticed that many of my daughter's mental states bear a marked resemblance to my own; this has arisen, no doubt, because part of my genetic plan was used to build her hardware and because I have shared in the responsibility of programming her. All well and good, but it would be straining credulity to regard my daughter as "explaining" me, as being a "theory" of me.
- What one would like is an elucidation of the senses in which programs, computers and other machines do and don't figure in the explanation of behavior (Cummins 1977; Marshall 1977). It is a pity that Searle disregards such questions in order to discuss the everyday use of mental vocabulary, an enterprise best left to lexicographers. Searle writes: "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't." Well, perhaps it does start there, but that is no reason to suppose it must finish there. How would such an "argument" fare in natural philosophy? "The study of physics starts with such facts as that tables are solid objects without holes in them, whereas Gruyère cheese. . . ." Would Searle now continue that "If you get a theory that denies this point, you have produced a counterexample to the theory and the theory is wrong"? Of course a thermostat's "belief" that the temperature should be a little higher is not the same kind of thing as my "belief" that it should be. It would be totally uninteresting if they were the same. Surely the theorist who compares the two must be groping towards a deeper parallel; he has seen an analogy that may illuminate certain aspects of the control and regulation of complex systems. The notion of positive and negative feedback is what made thermostats so appealing to Alfred Wallace and Charles Darwin, to Claude Bernard and Walter Cannon, and to Norbert Wiener and Kenneth Craik. Contemplation of governors and thermostats enabled them to see beyond appearances to a level at which there are profound similarities between animals and artifacts (Marshall 1977). It is Searle, not the theoretician, who doesn't really take the enterprise seriously.
According to Searle, "what we wanted to know is what distinguishes the mind from thermostats and livers." Yes, but that is not all; we also want to know at what levels of description there are striking resemblances between disparate phenomena.
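The negative-feedback idea that made thermostats interesting to these theorists can be made concrete in a few lines. The sketch below is my own illustration, not anything in the commentary; the function names and the gain parameter are assumptions chosen for clarity. It shows the one property governors, thermostats, and physiological regulators share at this level of description: the output opposes deviation from a set point, so the system settles there from any starting state.

```python
def thermostat_step(temperature: float, set_point: float, gain: float = 0.5) -> float:
    """Return the corrective heating (+) or cooling (-) applied this step."""
    error = set_point - temperature  # deviation from the set point
    return gain * error              # negative feedback: output opposes the error

def run(temperature: float, set_point: float, steps: int = 20) -> float:
    """Iterate the feedback loop; the temperature converges on the set point."""
    for _ in range(steps):
        temperature += thermostat_step(temperature, set_point)
    return temperature

# Whether the room starts too cold or too warm, the loop settles at 20 degrees.
print(run(15.0, 20.0))
print(run(30.0, 20.0))
```

Nothing here turns on whether the device "believes" anything; the point of the analogy is only that the same abstract loop describes both the artifact and, say, Bernard's milieu intérieur.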
In the opening paragraphs of Leviathan, Thomas Hobbes (1651, p. 8) gives clear expression to the mechanist's philosophy: Nature, the art whereby God hath made and governs the world, is by the art of man, as in many other things, in this also imitated, that it can make an artificial animal. . . . For what is the heart but a spring, and the nerves so many strings; and joints so many wheels giving motion to the whole body, such as was intended by the artificer? What is the notion of "imitation" that Hobbes is using here? Obviously not the idea of exact imitation or copying. No one would confuse a cranial nerve with a piece of string, a heart with the mainspring of a watch, or an ankle with a wheel. There is no question of trompe l'oeil. The works of the scientist are not in that sense reproductions of nature; rather they are attempts to see behind the phenomenological world to a hidden reality. It was Galileo, of course, who articulated this paradigm most forcefully: sculpture, remarks Galileo,
is "closer to nature" than painting in that the material substratum manipulated by the sculptor shares with the matter manipulated by nature herself the quality of three-dimensionality. But does this fact redound to the credit of sculpture? On the contrary, says Galileo, it greatly "diminishes its merit": "What will be so wonderful in imitating sculptress Nature by sculpture itself?" And he concludes: "The most artistic imitation is that which represents the three-dimensional by its opposite, which is the plane." (Panofsky 1954, p. 97) Galileo summarizes his position in the following words: "The further removed the means of imitation are from the thing to be imitated, the more worthy of admiration the imitation will be" (Panofsky 1954). In a footnote to the passage, Panofsky remarks on "the basic affinity between the spirit of this sentence and Galileo's unbounded admiration for Aristarchus and Copernicus 'because they trusted reason rather than sensory experience'" (Panofsky 1954).
- Now Searle is quite right in pointing out that in AI one seeks to model cognitive states and their consequences (the real thing) by a formal syntax, the interpretation of which exists only in the eye of the beholder. Precisely therein lies the beauty and significance of the enterprise - to try to provide a counterpart for each substantive distinction with a syntactic one. This is essentially to regard the study of the relationships between physical transactions and symbolic operations as an essay in cryptanalysis (Freud 1895; Cummins 1977). The interesting question then arises as to whether there is a unique mapping between the formal elements of the system and their "meanings" (Householder 1962).
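Householder's question about unique mappings can be illustrated with a toy formal system. This is my own construction, assumed purely for illustration: a single syntactic operation that admits two distinct, equally consistent semantic readings, so nothing in the syntax itself selects one "meaning" over the other.

```python
# A purely formal operation on strings of tally strokes: concatenation.
def combine(x: str, y: str) -> str:
    return x + y

# Reading 1: a stroke string denotes the number of strokes;
# under this interpretation, combine is ordinary addition.
def as_number(s: str) -> int:
    return len(s)

# Reading 2: a stroke string denotes only its parity (even/odd);
# under this interpretation, combine is addition modulo 2.
def as_parity(s: str) -> int:
    return len(s) % 2

a, b = "|||", "||"
# Both interpretations respect the same formal operation:
assert as_number(combine(a, b)) == as_number(a) + as_number(b)        # 5 == 3 + 2
assert as_parity(combine(a, b)) == (as_parity(a) + as_parity(b)) % 2  # 1 == (1 + 0) % 2
```

The "meaning" is supplied entirely by the observer's mapping, and more than one mapping fits; that is the cryptanalytic predicament the paragraph above describes.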
- Searle, however, seems to be suggesting that we abandon entirely both the Galilean and the "linguistic" mode in order merely to copy cognitions. He would apparently have us seek mind only in "neurons with axons and dendrites," although he admits, as an empirical possibility, that such objects might "produce consciousness, intentionality and all the rest of it using some other sorts of chemical principles than human beings use." But this admission gives the whole game away. How would Searle know that he had built a silicon-based mind (rather than our own carbon-based mind) except by having an appropriate abstract (that is, nonmaterial) characterization of what the two life forms hold in common? Searle finesses this problem by simply "attributing" cognitive states to himself, other people, and a variety of domestic animals: "In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects." But this really won't do: we are, after all, a long way from having any very convincing evidence that cats and dogs have "cognitive states" in anything like Searle's use of the term [See "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978].
- Thomas Huxley (1874, p. 156) poses the question in his paraphrase of Nicholas Malebranche's orthodox Cartesian line: "What proof is there that brutes are other than a superior race of marionettes, which eat without pleasure, cry without pain, desire nothing, know nothing, and only simulate intelligence as a bee simulates a mathematician?" Descartes' friend and correspondent, Marin Mersenne, had little doubt about the answer to this kind of question. In his discussion of the perceptual capacities of animals he forthrightly denies mentality to the beasts:
Animals have no knowledge of these sounds, but only a representation, without knowing whether what they apprehend is a sound or a color or something else; so one can say that they do not act so much as are acted upon, and that the objects make an impression upon their senses from which their action necessarily follows, as the wheels of a clock necessarily follow the weight or spring which drives them. (Mersenne 1636) For Mersenne, then, the program inside animals is indeed an uninterpreted calculus, a syntax without a semantics [See Fodor: "Methodological Solipsism" BBS 3(1) 1980]. Searle, on the other hand, seems to believe that apes, monkeys, and dogs do "have mental states" because they "are made of similar stuff to ourselves" and have eyes, a nose, and skin. I fail to see how the datum supports the conclusion. One might have thought that some quite intricate reasoning and subtle experimentation would be required to justify the ascription of intentionality to chimpanzees (Marshall 1971; Woodruff & Premack 1979). That chimpanzees look quite like us is a rather weak fact on which to build such a momentous conclusion.
- When Jacques de Vaucanson - the greatest of all AI theorists - had completed his artificial duck he showed it, in all its naked glory of wood, string, steel, and wire. However much his audience may have preferred a more cuddly creature, Vaucanson firmly resisted the temptation to clothe it:
Perhaps some Ladies, or some People, who only like the Outside of Animals, had rather have seen the whole cover'd; that is the Duck with Feathers. But besides, that I have been desir'd to make every thing visible; I wou'd not be thought to impose upon the Spectators by any conceal'd or juggling Contrivance (Fryer & Marshall 1979). For Vaucanson, the theory that he has embodied in the model duck is the real thing.