Full Text
- Searle claims that the apparently commonsensical programs of the Yale AI project really don't display meaningful understanding of text. For him, the computer processing a story about a restaurant visit is just a Chinese symbol manipulator blindly applying uncomprehended rules to uncomprehended text. What is missing, Searle says, is the presence of intentional states.
- Searle is misguided in this criticism in at least two ways. First of all, it is no trivial matter to write rules to transform the "Chinese symbols" of a story text into the "Chinese symbols" of appropriate answers to questions about the story. To dismiss this programming feat as mere rule mongering is like downgrading a good piece of literature as something that British Museum monkeys can eventually produce. The programmer needs a very crisp understanding of the real world to write the appropriate rules. Mediocre rules produce feeble-minded output and have to be rewritten. As rules are sharpened, the output gets more and more convincing, so that the process of rule development is convergent. This is a characteristic of the understanding of a content area, not of blind exercise within it.
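- To make the point concrete, here is a minimal sketch in Python; the rule set and matching scheme are wholly invented and far cruder than any actual Yale program. It shows only how rules can carry a story's "symbols" into answer "symbols" without the machine attaching any meaning to them: whatever understanding the output displays was put into the rules by their author.

```python
# Toy illustration only: a hypothetical restaurant "script" and a crude
# question answerer. The rules below are invented for this sketch and are
# not taken from SAM or any other Yale system.

STORY = [
    "John went to a restaurant",
    "John read the menu",
    "John ate a hamburger",
    "John left a large tip",
]

# Script rules: each observed event licenses further "facts" by rote.
SCRIPT_RULES = {
    "read the menu": ["the diner chose a dish"],
    "ate a hamburger": ["the diner ordered a hamburger",
                        "the diner was served a hamburger"],
    "left a large tip": ["the diner was pleased with the service"],
}

def infer(story):
    """Apply every rule whose trigger phrase appears in the story text."""
    facts = list(story)
    for sentence in story:
        for trigger, consequences in SCRIPT_RULES.items():
            if trigger in sentence:
                facts.extend(consequences)
    return facts

def answer(question, facts):
    """Return the first fact sharing a content-word stem with the question."""
    q_stems = {w[:5] for w in question.lower().rstrip("?").split() if len(w) > 4}
    for fact in facts:
        f_stems = {w[:5] for w in fact.lower().split() if len(w) > 4}
        if q_stems & f_stems:
            return fact
    return "no answer derivable from the rules"

facts = infer(STORY)
print(answer("What did John order?", facts))  # -> the diner ordered a hamburger
print(answer("Was John pleased?", facts))     # -> the diner was pleased with the service
```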
- Ah, but Searle would say that such understanding is in the programmer and not in the computer. Well, yes, but what's the issue? More precisely, the understanding is in the programmer's rule set, which the computer exercises. No one I know of (at Yale, at least) has claimed autonomy for the computer. The computer is not even necessary to the representational theory; it is just very, very convenient and very, very vivid.
- But just suppose that we wanted to claim that the computer itself understood the story content. How could such a claim be defended, given that the computer is merely crunching away on statements in program code and producing other statements in program code which (following translation) are applauded by outside observers as being correct and perhaps even clever? What kind of understanding is that? It is, I would assert, very much the kind of understanding that people display in exposure to new content via language or other symbol systems. When a child learns to add, what does he do except apply rules? Where does "understanding" enter? Is it understanding that the results of addition apply independent of content, so that m + n = p means that if you have m things and you assemble them with n things, then you'll have p things? But that's a rule, too. Is it understanding that the units place can be translated into pennies, the tens place into dimes, and the hundreds place into dollars, so that additions of numbers are isomorphic with additions of money? But that's a rule connecting rule systems. In general, it seems that as more and more rules about a given content are incorporated, especially if they connect with other content domains, we have a sense that understanding is increasing. At what point does a person graduate from "merely" manipulating rules to "really" understanding?
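- A minimal sketch of the addition example (Python; the lookup table and function names are my own invention): column-by-column addition is nothing but table lookup and carry rules, and reading the places as pennies, dimes, and dollars is one further rule mapping one rule system onto another.

```python
# Rule-governed addition: no "understanding" of quantity is appealed to,
# only a digit-sum table and carry rules. The coin reading at the end is
# the rule connecting rule systems mentioned in the text.

DIGIT_SUM = {(a, b): divmod(a + b, 10) for a in range(10) for b in range(10)}

def add_by_rules(x, y):
    """Add two numerals purely by table lookup and carry rules."""
    xs, ys = [int(d) for d in str(x)][::-1], [int(d) for d in str(y)][::-1]
    result, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        new_carry, digit = DIGIT_SUM[(a, b)]
        # fold in the incoming carry with one more rule application
        extra_carry, digit = divmod(digit + carry, 10)
        result.append(digit)
        carry = new_carry + extra_carry
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in result[::-1]))

def as_coins(n):
    """Translate places into money: units -> pennies, tens -> dimes, hundreds -> dollars."""
    dollars, rest = divmod(n, 100)
    dimes, pennies = divmod(rest, 10)
    return f"{dollars} dollar(s), {dimes} dime(s), {pennies} penny/pennies"

total = add_by_rules(123, 89)        # 212
print(total, "->", as_coins(total))  # 212 -> 2 dollar(s), 1 dime(s), 2 penny/pennies
```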
- Educationists would love to know, and so would I, but I would be willing to bet that by the Chinese symbol test, most of the people reading this don't really understand the transcendental number e, or economic inflation, or nuclear power plant safety, or how sailboats can sail upwind. (Be honest with yourself!) Searle's argument itself, sallying forth as it does into a symbol-laden domain that is intrinsically difficult to "understand," could well be seen as mere symbol manipulation. His main rule is that if you see the Chinese symbols for "formal computational operations," then you output the Chinese symbols for "no understanding at all."
- Given the very common exercise in human affairs of linguistic interchange in areas where it is not demonstrable that we know what we are talking about, we might well be humble and give the computer the benefit of the doubt when and if it performs as well as we do. If we credit people with understanding by virtue of their apparently competent verbal performances, we might extend the same courtesy to the machine. It is a conceit, not an insight, to give ourselves more credit for a comparable performance.
- But Searle airily dismisses this "other minds" argument, and still insists that the computer lacks something essential. Chinese symbol rules only go so far, and for him, if you don't have everything, you don't have anything. I should think rather that if you don't have everything, you don't have everything. But in any case, the missing ingredient for Searle is his concept of intentionality. In his paper, he does not justify why this is the key factor. It seems more obvious that what the manipulator of Chinese symbols misses is extensional validity. Not to know that the symbol for "menu" refers to that thing out in the world that you can hold and fold and look at closely is to miss some real understanding of what is meant by menu. I readily acknowledge the importance of such sensorimotor knowledge. The understanding of how a sailboat sails upwind gained through the feel of sail and rudder is certainly valid, and is not the same as a verbal explanation.
- Verbal-conceptual computer programs lacking sensorimotor connection with the world may well miss things. Imagine the following piece of a story: "John told Harry he couldn't find the book. Harry rolled his eyes toward the ceiling." Present common sense inference models can make various predictions about Harry's relation to the book and its unfindability. Perhaps he loaned it to John, and therefore would be upset that it seemed lost. But the unique and nondecomposable meaning of eye rolling is hard for a model to capture except by a clumsy, concrete dictionary entry. A human understander, on the other hand, can imitate Harry's eye roll overtly or in imagination and experience holistically the resigned frustration that Harry must feel. It is important to explore the domain of examples like this.
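- To see how clumsy the dictionary-entry route is, here is a toy sketch (Python; every rule is invented for illustration): the loan inference about the book falls out of a rule like any other, while eye rolling gets only a flat canned gloss that captures none of what a person recovers by imitating the gesture.

```python
# Hypothetical inference rules for the John/Harry fragment. Everything
# here is invented for this sketch; no real inference model is this crude.

INFERENCE_RULES = {
    "couldn't find the book": [
        "the book may be lost",
        "if Harry loaned the book to John, Harry may be upset",
    ],
    # The "clumsy, concrete dictionary entry" for the gesture:
    "rolled his eyes": ["eye rolling signals exasperation or resignation"],
}

def interpret(sentences):
    """Collect every canned reading whose trigger appears in the text."""
    readings = []
    for sentence in sentences:
        for trigger, glosses in INFERENCE_RULES.items():
            if trigger in sentence:
                readings.extend(glosses)
    return readings

story = ["John told Harry he couldn't find the book.",
         "Harry rolled his eyes toward the ceiling."]
for reading in interpret(story):
    print(reading)
```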
- But why instead is "intentionality" so important for Searle? If we recite his litany, "hopes, fears, and desires," we don't get the point. A computer or a human certainly need not have hopes or fears about the customer in order to understand a story about a restaurant visit. And inferential use of these concepts is well within the capabilities of computer understanding models. Goal-based inferences, for example, are a standard mechanism in programs of the Yale AI project. Rather, the crucial state of "intentionality" for knowledge is the appreciation of the conditions for its falsification. In what sense does the computer realize that the assertion, "John read the menu" might or might not be true, and that there are ways in the real world to find out?
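- As an illustration of the kind of goal-based inference meant here, consider a toy sketch (Python; the goal and expectation tables are invented, not drawn from any actual Yale program): an observed action is explained by the goal it conventionally serves, and that goal in turn generates expectations about what the actor will do next.

```python
# A toy goal-based inference table, invented for this sketch.

# Observed actions and the goals they conventionally serve.
ACTION_TO_GOAL = {
    "entered a restaurant": "satisfy hunger",
    "picked up a phone book": "find a phone number",
    "read the menu": "choose something to eat",
}

# Goals and the follow-up actions they make likely.
GOAL_TO_EXPECTATION = {
    "satisfy hunger": "order and eat a meal",
    "find a phone number": "make a call",
    "choose something to eat": "place an order",
}

def explain(action):
    """Explain an action by the goal it serves and predict what follows."""
    goal = ACTION_TO_GOAL.get(action)
    if goal is None:
        return f"no goal-based inference available for '{action}'"
    expectation = GOAL_TO_EXPECTATION[goal]
    return (f"'{action}' is explained by the goal '{goal}'; "
            f"expect the actor to {expectation}")

print(explain("entered a restaurant"))
print(explain("rolled his eyes toward the ceiling"))
```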
- Well, Searle has a point there, although I do not see it as the trump card he thinks he is playing. The computer operates in a gullible fashion: it takes every assertion to be true. There are thus certain knowledge problems that have not been considered in artificial intelligence programs for language understanding, for example, the question of what to do when a belief about the world is contradicted by data: should the belief be modified, or the data called into question? These questions have been discussed by psychologists in the context of human knowledge-handling proclivities, but the issues are beyond present AI capability. We shall have to see what happens in this area. The naiveté of computers about the validity of what we tell them is perhaps touching, but it would hardly seem to justify the total scorn exhibited by Searle. There are many areas of knowledge within which questions of falsifiability are quite secondary: the understanding of literary fiction, for example. Searle has not made a convincing case that intentionality is fundamentally essential to understanding. My Chinese symbol processor, at any rate, is not about to output the symbol for "surrender."
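- The gullibility point can be made concrete with a small sketch (Python; the representation is invented): a store of beliefs that accepts whatever it is told can at least be made to notice when new input contradicts a stored belief, but whether to revise the belief or to doubt the data is exactly the question that remains open.

```python
# A hypothetical, gullible belief store for illustration only.

class GullibleStore:
    def __init__(self):
        self.beliefs = {}  # proposition -> truth value

    def tell(self, proposition, value=True):
        """Accept an assertion, flagging (but not resolving) contradictions."""
        if proposition in self.beliefs and self.beliefs[proposition] != value:
            print(f"CONFLICT: believed {proposition}={self.beliefs[proposition]}, "
                  f"now told {proposition}={value}; revise the belief or doubt the data?")
        self.beliefs[proposition] = value  # gullible default: accept the input

    def ask(self, proposition):
        return self.beliefs.get(proposition, "unknown")

kb = GullibleStore()
kb.tell("John read the menu")
print(kb.ask("John read the menu"))          # True
kb.tell("John read the menu", value=False)   # prints the conflict notice
print(kb.ask("John read the menu"))          # False: the new data won
```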