- Searle's arguments on the feasibility of computer understanding contain several simple but fatal logical flaws. I can deal only with the most important difficulties here. However, it is the general thrust of Searle's remarks rather than the technical flaws in his arguments that motivates this commentary. Searle's paper suggests that even the best simulation of intelligent behavior would explain nothing about cognition, and he argues in support of this claim. Since I would like to claim that computer simulation can yield important insights into the nature of human cognitive processes, it is important to show why Searle's arguments do not threaten this enterprise.
- My main objection to Searle's argument is what he has termed the "Berkeley systems reply." The position states that the man-in-the-room scenario presents no problem to a strong AI-er who claims that understanding is a property of an information-processing system. The man in the room with the ledger, functioning in the manner prescribed by the cognitive theorist, constitutes one such system. The man functioning in his normal everyday manner is another system. The "ordinary man" system may not understand Chinese, but this says nothing about the capabilities of the "man-in-the-room" system, which must therefore remain at least a candidate for consideration as an understander in view of its language-processing capabilities.
- Searle's response to this argument is to have the man internalize the "man-in-the-room" system by keeping all the rules and computations in his head. He now encompasses the whole system. Searle argues that if the man "doesn't understand, then there is no way the system could understand because the system is just a part of him."
- However, this is just plain wrong. Lots of systems (in fact, most interesting systems) are embodied in other systems of weaker capabilities. For example, the hardware of a computer may not be able to multiply polynomials, or sort lists, or process natural language, although programs running on that hardware can; individual neurons probably don't have much - if any - understanding capability, although the systems they constitute may understand quite a bit.
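The point that a capable system can be embodied in a much weaker one can be made concrete with a small sketch (all names and rules here are hypothetical, chosen only for illustration). The "room" below does nothing but match and rewrite opaque marks according to a fixed rule book, with no notion of numbers; yet the system those rewrites implement performs unary addition.

```python
# A minimal sketch of a rule-following "room": the implementing layer only
# matches and rewrites symbols, with no access to what they mean.
def room_step(tape, rules):
    """Apply the first matching rewrite rule; the room knows only symbols."""
    for pattern, replacement in rules:
        if pattern in tape:
            return tape.replace(pattern, replacement, 1)
    return tape  # no rule applies; the tape is finished

# A hypothetical rule book. At the system level it encodes unary addition:
# marks are pushed across the '+', then the '+' is dropped.
RULES = [("I+", "+I"), ("+", "")]

def run(tape, rules):
    """Repeat room_step until the tape stops changing."""
    prev = None
    while tape != prev:
        prev, tape = tape, room_step(tape, rules)
    return tape

# The implemented system adds 3 + 2, though the room itself only shuffles marks.
print(run("III+II", RULES))  # → "IIIII"
```

The room-level description (find a pattern, substitute, repeat) is strictly weaker than the system-level description (addition), which is exactly the relation the systems reply claims holds between the homunculus and the Chinese-understanding system.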
- The difficulty in comprehending the systems position in the case of Searle's paradox lies in seeing the person as two separate systems. The following elaboration may be useful. Suppose we decided to resolve the issue once and for all simply by asking the person involved whether he understands Chinese. We hand the person a piece of paper with Chinese characters that mean (loosely translated) "Do you understand Chinese?" If the man-in-the-room system were to respond, by making the appropriate symbol manipulations, it would return a strip of paper with the message: "Of course I understand Chinese! What do you think I've been doing? Are you joking?" A heated dialogue then transpires, after which we apologize to the man-in-the-room system for our rude innuendos. Immediately thereafter, we approach the man himself (that is, we ask him to stop playing with the pieces of paper and talk to us directly) and ask him if he happens to know Chinese. He will of course deny such knowledge.
- Searle's mistake of identifying the experiences of one system with those of its implementing system is one philosophers often make when referring to AI systems. For example, Searle says that the English subsystem knows that "hamburgers" refers to hamburgers, but that the Chinese subsystem knows only about formal symbols. But it is really the homunculus who is conscious of symbol manipulation and has no idea what higher-level task he is engaged in. The parasitic system is involved in this higher-level task, and has no knowledge at all that it is implemented via symbol manipulation, any more than we are aware of how our own cognitive processes are implemented.
- What's unusual about this situation is not that one system is embedded in a weaker one, but that the implementing system is so much more powerful than it need be. That is, the homunculus is a full-fledged understander, operating at a small percentage of its capacity to push around some symbols. If we replace the man by a device that is capable of performing only these operations, the temptation to view the two systems as identical greatly diminishes.
- It is important to point out, contrary to Searle's claim, that the systems position itself does not constitute a strong AI claim. It simply shows that if it is possible that a system other than a person functioning in the standard manner can understand, then the man-in-the-room argument is not at all problematic. If we deny this possibility to begin with, then the delicate man-in-the-room argument is unnecessary - a computer program is something other than a person functioning normally, and by assumption would not be capable of understanding.
- Searle also puts forth an argument about simulation in general. He states that since a simulation of a storm won't leave us wet, why should we assume that a simulation of understanding should understand? Well, the reason is that while simulations don't necessarily preserve all the properties of what they simulate, they don't necessarily violate particular properties either. I could simulate a storm in the lab by spraying water through a hose. If I'm interested in studying particular properties, I don't have to abandon simulations; I merely have to be careful about which properties the simulation I construct is likely to preserve.
- So it all boils down to the question, what sort of thing is understanding? If it is an inherently physical thing, like fire or rain or digestion, then preserving the logical properties of understanding will in fact not preserve the essential nature of the phenomenon, and a computer simulation will not understand. If, on the other hand, understanding is essentially a logical or symbolic type of activity, then preserving its logical properties would be sufficient to have understanding, and a computer simulation will literally understand.
- Searle's claim is that the term "understanding" refers to a physical phenomenon, much in the same way that the term "photosynthesis" does. His argument here is strictly an appeal to our intuitions about the meaning of this term. My own intuitions simply do not involve the causal properties of biological organisms (although they do involve their logical and behavioral properties). It seems to me that this must be true for most people, as most people could be fooled into thinking that a computer simulation really understands, but a simulation of photosynthesis would not fool anyone into thinking it had actually created sugar from water and carbon dioxide.
- A major theme in Searle's paper is that intentionality is really at the bottom of the problem. Computers fail to meet the criteria of true understanding because they just don't have intentional states, with all that entails. This, according to Searle, is in fact what boggles one's intuitions in the man-in-the-room example.
- However, it seems to me that Searle's argument has nothing to do with intentionality at all. What causes difficulty in attributing intentional states to machines is the fact that most of these states have a subjective nature as well. If this is the case, then Searle's man-in-the-room example could be used to simulate a person having some non-intentional but subjective state, and it would still have its desired effect. This is precisely what happens. For example, suppose we simulated someone undergoing undirected anxiety. It's hard to believe that anything - the man doing the simulation or the system he implements - is actually experiencing undirected anxiety, even though this is not an intentional state.
- Furthermore, the experience of discomfort seems proportional to subjectivity, but independent of intentionality. It doesn't bother my intuitions much to hear that a computer can understand or know something; that it is believing something is a little harder to swallow, and that it has love, hate, rage, pain, and anxiety are much worse. Notice that the subjectivity seems to increase in each case, but the intentionality remains the same. The point is that Searle's argument has nothing to do with intentionality per se, and sheds no light on the nature of intentional states or on the kinds of mechanisms capable of having them.
- I'd like to sum up by saying one last word on Searle's man-in-the-room experiment, as this forms the basis for most of his subsequent arguments. Woody Allen in Without Feathers describes a mythical beast called the Great Roe. The Great Roe has the head of a lion, and the body of a lion - but not the same lion. Searle's Gedankenexperiment is really a Great Roe - the head of an understander and the body of an understander, but not the same understander. Herein lies the difficulty.