- Extensive use of intentional idioms is now common in discussions of the capabilities and functioning of AI systems. Often these descriptions are to be taken no more substantively than in much of ordinary programming where one might say, for example, that a statistical regression program "wants" to minimize the sum of squared deviations or "believes" it has found a best-fitting function when it has done so. In other cases, the intentional account is meant to be taken more literally. This practice requires at least some commitment to the claim that intentional states can be achieved in a machine just in virtue of its performing certain computations. Searle's article serves as a cogent and timely indicator of some of the pitfalls that attend such a claim.
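- How loosely such idioms are ordinarily meant can be seen in a minimal sketch (Python, with hypothetical names chosen only for illustration): nothing in the code answers to "wanting" or "believing"; the idioms merely gloss the arithmetic.

```python
# A least-squares line fit. The program "wants" to minimize the sum of
# squared deviations only in the loose sense that this is the quantity
# its arithmetic happens to reduce.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def squared_error(xs, ys, a, b):
    # The quantity the program is idiomatically said to "want" small.
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
```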
- If certain AI systems are to possess intentionality, while other computational systems do not, then it ought to be in virtue of some set of purely computational principles. However, as Searle points out, no such principles have yet been forthcoming from AI. Moreover, there is reason to believe that they never will be. A sketch of one sort of argument is as follows: intentional states are, by definition, "directed at" objects and states of affairs in the world. Hence the first requirement for any theory about them would be to specify the relation between the states and the world they are "about." However, it is precisely this relation that is not part of the computational account of mental states (cf. Fodor 1980). A computational system can be interfaced with an external environment in any way a human user may choose. There is no dependence of this relation on any ontogenetic or phylogenetic history of interaction with the environment. In fact, the relation between system and environment can be anything at all without affecting the computations performed on symbols that purportedly refer to it. This fact casts considerable doubt on whether any purely computational theory of intentionality is possible.
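- The arbitrariness of the symbol-world relation can be made vivid with a deliberately trivial sketch (hypothetical names throughout): the same formal procedure can be wired to entirely different environments without any difference to the computations performed on its tokens.

```python
# A purely formal rule over uninterpreted tokens.
def classify(token):
    return "ALPHA" if token.startswith("X") else "BETA"

# Two arbitrary "interfaces" to the world. Which one supplies the token
# makes no difference to any computation performed on it.
def from_camera():       # pretend this token denotes a visual scene
    return "X-1042"

def from_dice_roll():    # ...or nothing in particular
    return "X-1042"

assert classify(from_camera()) == classify(from_dice_roll())
```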
- Searle attempts to establish an even stronger conclusion: his argument is that the computational realization of intentional states is, in fact, impossible on a priori grounds. The argument is based on a "simulation game" - a kind of dual of Turing's imitation game - in which man mimics computer. In the simulation game, a human agent instantiates a computer program by performing purely syntactic operations on meaningless symbols. The point of the demonstration is that merely following rules for the performance of such operations is not sufficient for manifesting the right sort of intentionality. In particular, a given set of rules could create an effective mimicking of some intelligent activity without bringing the rule-following agent any closer to having intentional states pertaining to the domain in question.
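- The flavour of such purely syntactic rule-following can be conveyed by a toy analogue (not Searle's own example; the symbol names are invented): executing the rules requires only shape-matching, not any grasp of what, if anything, the shapes are about.

```python
# The rule book pairs uninterpreted input shapes with uninterpreted
# output shapes.
RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "SQUAGGLE",
    ("SQUOGGLE", "SQUIGGLE"): "SQUIGGLE",
}

def follow_rules(symbols):
    # Pure shape-matching; the agent executing this needs no intentional
    # states directed at whatever the symbols might denote.
    return RULES.get(tuple(symbols), "SQUOGGLE")

print(follow_rules(["SQUIGGLE", "SQUOGGLE"]))   # -> SQUAGGLE
```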
- One difficulty with this argument is that it does not distinguish between two fundamentally different ways of instantiating a computer program or other explicit procedure in a physical system. One way is to embed the program in a system that is already capable of interpreting and following rules. This requires that the procedure be expressed in a "language" that the embedding system can already "understand." A second way is to instantiate the program directly by realizing its "rules" as primitive hardware operations. In this case a rule is followed, not by "interpreting" it, but by just running off whatever procedure the rule denotes. Searle's simulation game is germane to the first kind of instantiation but not the second. Following rules in natural language (as the simulation game requires) involves the mediation of other intentional states and so is necessarily an instance of indirect instantiation. To mimic a direct instantiation of a program faithfully, on the other hand, the relevant primitives would have to be realized non-mediately in one's own activity. If such mimicry were possible, it could be done only at the cost of being unable to report on the system's lack of intentional states, if in fact it had none.
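- The two ways of instantiating a rule can be caricatured in a few lines of Python (hypothetical names; and with the caveat that a high-level sketch like this is itself indirectly instantiated by the language's interpreter):

```python
# Indirect instantiation: the rule is expressed in a notation that an
# already rule-following host system must interpret.
RULE_TEXT = "double"            # a "sentence" the host must understand

def interpret(rule, x):
    if rule == "double":        # mediation: parsing and dispatching
        return x + x
    raise ValueError("unknown rule")

# Direct instantiation: the same rule realized as a primitive operation,
# simply run off with no mediating act of interpretation.
def double(x):
    return x + x

assert interpret(RULE_TEXT, 3) == double(3) == 6
```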
- The distinction between directly and indirectly instantiating computational procedures is important because both kinds of processes are required to specify a computational system completely. The first comprises its architecture or set of primitives, and the second comprises the algorithms the system can apply (Newell 1973; 1980). Hence Searle's argument is a challenge to strong AI when that view is put forward in terms of the capabilities of programs, but not when it is framed (as, for example, by Pylyshyn 1980a) in terms of computational systems. The claim that the latter cannot have intentional states must therefore proceed along different lines. The approach considered earlier, for example, called attention to the arbitrary relation between computational symbol and referent. Elsewhere the argument has been put forward in more detail that it is an overly restrictive notion of symbol that creates the most serious difficulties for the computational theory (Kolers & Smythe 1979; Smythe 1979). The notion of an independent token subject only to formal syntactic manipulation is neither a sufficient characterization of what a symbol is, nor well motivated in the domain of human cognition. Sound though this argument is, it is not the sort of secure conclusion that Searle's simulation game tries to demonstrate.
- However, the simulation game does shed some light on another issue. Why is the belief so pervasive that AI systems are truly constitutive of mental events? One answer is that many people seem to be playing a different version of the simulation game from the one that Searle recommends. The symbols of most AI and cognitive simulation systems are rarely the kind of meaningless tokens that Searle's simulation game requires. Rather, they are often externalized in forms - pictorial and linguistic inscriptions, for example - that carry a good deal of surplus meaning to the user, over and above their procedural identity in the system itself. This sort of realization of the symbols can lead to serious theoretical problems. For example, systems like that of Kosslyn and Shwartz (1977) give the appearance of operating on mental images largely because their internal representations "look" like images when displayed on a cathode ray tube. It is unclear that the system could be said to manipulate images in any other sense. There is a similar problem with language understanding systems. The semantics of such systems is often assessed by means of an informal procedure that Hayes (1977, p. 559) calls "pretend-it's-English." That is, misleading conclusions about the capabilities of these systems can result from the superficial resemblance of their internal representations to statements in natural language. An important virtue of Searle's argument is that it specifies how to play the simulation game correctly. The procedural realization of the symbols is all that should matter in a computational theory; their external appearance ought to be irrelevant.
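- The worry can be made concrete with a small sketch (hypothetical, and not the Kosslyn and Shwartz system itself): internally the "image" is just a set of coordinate tokens manipulated formally; only the display step, added for the user's benefit, makes anything "look" pictorial.

```python
# Internally, the "image" is a set of coordinate pairs - an "X", to us.
POINTS = {(0, 0), (1, 1), (2, 2), (0, 2), (2, 0)}

def translate(points, dx, dy):
    # The system's actual operation: formal manipulation of pairs.
    return {(x + dx, y + dy) for (x, y) in points}

def render(points, size=3):
    # The display step, for the human user's benefit only.
    for y in range(size - 1, -1, -1):
        print("".join("*" if (x, y) in points else "." for x in range(size)))

render(POINTS)   # only here does anything "look" like an image
```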
- The game, played this way, may not firmly establish that computational systems lack intentionality. However, it at least undermines one powerful tacit motivation for supposing they have it.