- There are two sides to this commentary: the first, that machines can embody somewhat more than Searle imagines; the second, that humans embody somewhat less. My conclusion will be that the two systems can in principle achieve similar levels of function.
- My response to Searle's Gedankenexperiment is a variant of the "robot reply": the robot simply needs more information, both environmental and a priori, than Searle is willing to give to it. The robot can internalize meaning only if it can receive information relevant to a definition of meaning, that is, information with a known relationship to the outside world. First it needs some Kantian innate ideas, such as the fact that some input lines (for instance, inputs from the two eyes or from locations in the same eye) are topographically related to one another. In biological brains this is done with labeled lines. Some of the inputs, such as visual inputs, will be connected primarily with spatial processing programs while others such as auditory ones will be more closely related to temporal processing. Further, the system will be built to avoid some input strings (those representing pain, for example) and to seek others (water when thirsty). These properties and many more are built into the structure of human brains genetically, but can be built into a program as a data base just as well. It may be that the homunculus represented in this program would not know what's going on, but it would soon learn, because it has all of the information necessary to construct a representation of events in the outside world.
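The claim that genetically built-in properties "can be built into a program as a data base just as well" can be sketched concretely. This is a minimal illustration, not an implementation: the field names, channel names, and valence values are all my assumptions, standing in for the labeled lines, topographic relations, and innate approach/avoid dispositions described above.

```python
# Sketch: "Kantian innate ideas" supplied to a robot as a database.
# Each input line carries a label (its modality), its topographic
# neighbors, and an innate valence the system seeks or avoids.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class InputLine:
    name: str
    modality: str                                   # routes to spatial vs. temporal processing
    neighbors: list = field(default_factory=list)   # topographic relations ("labeled lines")
    valence: int = 0                                # innate: negative = avoid (pain), positive = seek (water)

INNATE = [
    InputLine("left_eye_0", "visual", neighbors=["left_eye_1", "right_eye_0"]),
    InputLine("cochlea_0", "auditory"),
    InputLine("nociceptor_0", "somatic", valence=-1),          # built to avoid
    InputLine("osmoreceptor_0", "interoceptive", valence=+1),  # built to seek (water when thirsty)
]

def route(line: InputLine) -> str:
    """Dispatch a labeled line to the processing style its modality implies."""
    return "spatial" if line.modality == "visual" else "temporal"
```

The point of the sketch is only that nothing here is mysterious: the "innate ideas" are ordinary data that the rest of the program can consult.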
- My super robot would learn about the number five, for instance, in the same way that a child does, by interaction with the outside world where the occurrence of the string of symbols representing "five" in its visual or auditory inputs corresponds with the more direct experience of five of something. The fact that numbers can be coded in the computer in more economical ways is no more relevant than the fact that the number five is coded in the digits of a child's hand. Both a priori knowledge and environmental knowledge could be made similar in quantity and quality to that available to a human.
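The learning process described above, in which the symbol "five" acquires meaning through co-occurrence with the direct experience of five of something, can be sketched as a simple associative tally. This is an illustrative toy under my own assumptions, not a model of child learning:

```python
# Sketch: a symbol acquires "meaning" by co-occurrence with direct
# experience. The token "five" becomes linked to whatever count of
# items is most often present when the token occurs. Illustrative only.
from collections import Counter, defaultdict

associations = defaultdict(Counter)

def observe(token: str, scene_items: list) -> None:
    """Strengthen the link between a perceived token and the count of
    items simultaneously experienced."""
    associations[token][len(scene_items)] += 1

def meaning(token: str) -> int:
    """The count most often co-experienced with the token."""
    return associations[token].most_common(1)[0][0]

observe("five", ["finger"] * 5)
observe("five", ["apple"] * 5)
observe("five", ["dot"] * 4)   # a noisy experience does not dominate
```

After these observations, `meaning("five")` settles on 5 despite the noisy case, which is all the grounding the argument requires.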
- Now I will try to show that human intentionality is not as qualitatively different from machine states as it might seem to an introspectionist. The brain is similar to a computer program in that it too receives only strings of input and produces only strings of output. The inputs are small 0.1-volt signals entering in great profusion along afferent nerves, and the outputs are physically identical signals leaving the central nervous system on efferent nerves. The brain is deaf, dumb, and blind, so that the electrical signals (and a few hormonal messages which need not concern us here) are the only ways that the brain has of knowing about its world or acting upon it.
- The exception to this rule is the existing information stored in the brain, both that given in genetic development and that added by experience. But it too came without intentionality of the sort that Searle seems to require, the genetic information being received from long strings of DNA base sequences (clearly there is no intentionality here), and previous inputs being made up of the same streams of 0.1-volt signals that constitute the present input. Now it is clear that no neuron receiving any of these signals or similar signals generated inside the brain has any idea of what is going on. The neuron is only a humble machine which receives inputs and generates outputs as a function of the temporal and spatial relations of the inputs, and its own structural properties. To assert any further properties of brains is the worst sort of dualism.
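The "humble machine" view of the neuron, whose output is purely a function of its inputs and its own structural properties, can be sketched as a McCulloch-Pitts-style threshold unit. This is offered as an illustration of the claim, not as a model of real neural dynamics; the weights and threshold are my assumptions.

```python
# Sketch: the neuron as a humble machine. Its output is a fixed
# function of its inputs (which spikes arrived) and its structural
# properties (weights, threshold). The unit has no access to what
# the inputs mean -- it cannot "know what is going on."

def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of input events crosses threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Two input patterns, same machine:
neuron([1, 1, 0], [0.5, 0.6, 1.0], 1.0)   # 0.5 + 0.6 = 1.1 >= 1.0, so it fires
neuron([1, 0, 0], [0.5, 0.6, 1.0], 1.0)   # 0.5 < 1.0, so it stays silent
```

Whatever intentionality the whole brain has must emerge from the organization of billions of such units, since no single unit possesses any.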
- Searle grants that humans have intentionality, and toward the end of his article he also admits that many animals might have intentionality. But how far down the phylogenetic scale is he willing to go [see "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978]? Does a single-celled animal have intentionality? Clearly not, for it is only a simple machine which receives physically identifiable inputs and "automatically" generates reflex outputs. The hydra with a few dozen neurons might be explained in the same way, a simple nerve network with inputs and outputs that are restricted, relatively easy to understand, and processed according to fixed patterns. Now what about the mollusc with a few hundred neurons, the insect with a few thousand, the amphibian with a few million, or the mammal with billions? To make his argument convincing, Searle needs a criterion for a dividing line in his implicit dualism.
- We are left with a human brain that has an intention-free, genetically determined structure, on which are superimposed the results of storms of tiny nerve signals. From this we somehow introspect an intentionality that cannot be assigned to machines. Searle uses the example of arithmetic manipulations to show how humans "understand" something that machines don't. I submit that neither humans nor machines understand numbers in the sense Searle intends. The understanding of numbers greater than about five is always an illusion, for humans can deal with larger numbers only by using memorized tricks rather than true understanding. If I want to add 27 and 54, I don't use some direct numerical understanding or even a spatial or electrical analogue in my brain. Instead, I apply rules that I memorized in elementary school without really knowing what they meant, and combine these rules with memorized facts of addition of one-digit numbers to arrive at an answer without understanding the numbers themselves. Though I have the feeling that I am performing operations on numbers, in terms of the algorithms I use there is nothing numerical about it. In the same way I can add numbers in the billions, although neither I nor anyone else has any concept of what these numbers mean in terms of perceptually meaningful quantities. Any further understanding of the number system that I possess is irrelevant, for it is not used in performing simple computations.
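The addition procedure described above, memorized single-digit facts combined with mechanical carry rules, is itself an algorithm, and writing it out makes the point vivid: nothing in it represents quantity. The sketch below is my illustration of the schoolbook method, with the "memorized facts" held as a pure lookup table:

```python
# Sketch: adding 27 and 54 the way the commentary describes -- by
# looking up memorized single-digit facts and applying a carry rule.
# No step in the procedure involves any understanding of quantity.

# "Memorized facts of addition of one-digit numbers": a lookup table,
# built once, consulted thereafter like rote memory.
FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def schoolbook_add(x: str, y: str) -> str:
    """Add two decimal numerals as symbol strings, right to left."""
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad the shorter numeral
    carry, digits = 0, []
    for cx, cy in zip(reversed(x), reversed(y)):
        total = FACTS[(int(cx), int(cy))] + carry   # table lookup, not insight
        carry, digit = divmod(total, 10)            # the memorized carry rule
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

schoolbook_add("27", "54")   # "81", produced without the numbers ever being "understood"
```

The procedure works identically for numbers in the billions, which is exactly the point: the rules, not any grasp of magnitude, do all the work.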
- The illusion of having a consciousness of numbers is similar to the illusion of having a full-color, well-focused visual field; such a concept exists in our consciousness, but the physiological reality falls far short of the introspection. High-quality color information is available only in about the central thirty degrees of the visual field, and the best spatial information in only one or two degrees. I suggest that the feeling of intentionality is a cognitive illusion similar to the feeling of the high-quality visual image. Consciousness is a neurological system like any other, with functions such as the long-term direction of behavior (intentionality?), access to long-term memories, and several other characteristics that make it a powerful, though limited-capacity, processor of biologically useful information.
- All of Searle's replies to his Gedankenexperiment are variations on the theme that I have described here, that an adequately designed machine could include intentionality as an emergent quality even though individual parts (transistors, neurons, or whatever) have none. All of the replies have an element of truth, and their shortcomings are more in their failure to communicate the similarity of brains and machines to Searle than in any internal weaknesses. Perhaps the most important difference between brains and machines lies not in their instantiation but in their history, for humans have evolved to perform a variety of poorly understood functions including reproduction and survival in a complex social and ecological context. Programs, being designed without extensive evolution, have more restricted goals and motivations.
- Searle's accusation of dualism in AI falls wide of the mark because the mechanist does not insist on a particular mechanism in the organism, but only that "mental" processes be represented in a physical system when the system is functioning. A program lying on a tape spool in a corner is no more conscious than a brain preserved in a glass jar, and insisting that the program if read into an appropriate computer would function with intentionality asserts only that the adequate machine consists of an organization imposed on a physical substrate. The organization is no more mentalistic than the substrate itself. Artificial intelligence is about programs rather than machines only because the process of organizing information and inputs and outputs into an information system has been largely solved by digital computers. Therefore, the program is the only step in the process left to worry about.
- Searle may well be right that present programs (as in Schank & Abelson 1977) do not instantiate intentionality according to his definition. The issue is not whether present programs do this but whether it is possible in principle to build machines that make plans and achieve goals. Searle has given us no evidence that this is not possible.