- What kind of stuff can refer? Searle would have us believe that computers, qua formal symbol manipulators, necessarily lack the quality of intentionality, or the capacity to understand and to refer, because they have different "causal powers" from us. Although just what having different causal powers amounts to (other than not being capable of intentionality) is not spelled out, it appears at least that systems that are functionally identical need not have the same "causal powers." Thus the relation of equivalence with respect to causal powers is a refinement of the relation of equivalence with respect to function. What Searle wants to claim is that only systems that are equivalent to humans in this stronger sense can have intentionality. His thesis thus hangs on the assumption that intentionality is tied very closely to specific material properties - indeed, that it is literally caused by them. From that point of view it would be extremely unlikely that any system not made of protoplasm - or something essentially identical to protoplasm - can have intentionality. Thus if more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.
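- A minimal sketch of what "keeping the input-output function of each unit identical" amounts to (the threshold unit and its lookup-table "chip" replacement below are hypothetical illustrations, not a model of real neurons): a unit individuated only by its input-output function can be replaced by something made of entirely different stuff without any functional difference.

```python
# Hypothetical illustration: a unit characterized only by its input-output
# function, and a replacement built from different "stuff" that reproduces
# that function exactly.

def original_unit(inputs):
    # The unit "fires" (returns 1) when its summed input exceeds a threshold.
    return 1 if sum(inputs) > 1.5 else 0

# The "chip" replacement: a lookup table precomputed to give the same output
# for every input pattern the unit will encounter.
PATTERNS = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 1, 1)]
CHIP = {p: original_unit(p) for p in PATTERNS}

def replacement_unit(inputs):
    return CHIP[tuple(inputs)]

# Functionally identical over these cases, however different the innards:
for p in PATTERNS:
    assert original_unit(p) == replacement_unit(p)
```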
- Searle presents a variety of seductive metaphors and appeals to intuition in support of this rather astonishing view. For example, he asks: why should we find the view that intentionality is tied to detailed properties of the material composition of the system so surprising, when we so readily accept the parallel claim in the case of lactation? Surely it's obvious that only a system with certain causal powers can produce milk; but then why should the same not be true of the ability to refer? Why this example should strike Searle as even remotely relevant is not clear, however. The product of lactation is a substance, milk, whose essential defining properties are, naturally, physical and chemical ones (although nothing prevents the production of synthetic milk using a process that is materially very different from mammalian lactation). Is Searle then proposing that intentionality is a substance secreted by the brain, and that a possible test for intentionality might involve, say, titrating the brain tissue that realized some putative mental episodes?
- Similarly, Searle says that it's obvious that merely having a program can't possibly be a sufficient condition for intentionality since you can implement that program on a Turing machine made out of "a roll of toilet paper and a pile of small stones." Such a machine would not have intentionality because such objects "are the wrong kind of stuff to have intentionality in the first place." But what is the right kind of stuff? Is it cell assemblies, individual neurons, protoplasm, protein molecules, atoms of carbon and hydrogen, elementary particles? Let Searle name the level, and it can be simulated perfectly well using "the wrong kind of stuff." Clearly it isn't the stuff that has the intentionality. Your brain cells don't refer any more than do the water pipes, bits of paper, computer operations, or the homunculus in the Chinese room examples. Searle presents no argument for the assumption that what makes the difference between being able to refer and not being able to refer - or to display any other capacity - is a "finer grained" property of the system than can be captured in a functional description. Furthermore, it's obvious from Searle's own argument that the nature of the stuff cannot be what is relevant, since the monolingual English speaker who has memorized the formal rules is supposed to be an example of a system made of the right stuff and yet it allegedly still lacks the relevant intentionality.
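- As a purely illustrative sketch of the point about substrate (the transition table and the flip-the-bits program below are hypothetical, chosen only for brevity), a toy Turing machine runs the same formal program whatever its tape and states happen to be made of - Python lists here, paper and stones in Searle's example.

```python
# Toy Turing machine: the "program" is just a formal transition table.
# (state, symbol read) -> (next state, symbol to write, head movement)
# This particular table flips 0s and 1s until it reaches a blank ("_").
PROGRAM = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

def run(tape, state="scan", head=0):
    """Run the formal program on any substrate that can stand in for a tape."""
    cells = list(tape)  # a Python list; stones on a roll of paper would do as well
    while state != "halt":
        state, symbol, move = PROGRAM[(state, cells[head])]
        cells[head] = symbol
        head += move
    return "".join(cells)

print(run("10110_"))  # -> "01001_"
```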
- Having said all this, however, one might still want to maintain that in some cases - perhaps in the case of Searle's example - it might be appropriate to say that nothing refers, or that the symbols are not being used in a way that refers to something. But if we wanted to deny that these symbols referred, it would be appropriate to ask what licenses us ever to say that a symbol refers. There are at least three different approaches to answering that question: Searle's view that it is the nature of the embodiment of the symbol (of the brain substance itself), the traditional functionalist view that it is the functional role that the symbol plays in the overall behavior of the system, and the view associated with philosophers like Kripke and Putnam, that it is the nature of the causal connection that the symbol has with certain past events. The latter two are in fact compatible insofar as specifying the functional role of a symbol in the behavior of a system does not preclude specifying its causal interactions with an environment. It is noteworthy that Searle does not even consider the possibility that a purely formal computational model might constitute an essential part of an adequate theory, where the latter also contained an account of the system's transducers and an account of how the symbols came to acquire the role that they have in the functioning of the system.
- Functionalism and reference. The functionalist view is currently the dominant one in both AI and information-processing psychology. In the past, mentalism often assumed that reference was established by relations of similarity; an image referred to a horse if it looked sufficiently like a horse. Mediational behaviorism took it to be a simple causal remnant of perception: a brain event referred to a certain object if it shared some of the properties of brain events that occur when that object is perceived. But information-processing psychology has opted for a level of description that deals with the informational, or encoded, aspects of the environment's effects on the organism. On this view it has typically been assumed that what a symbol represents can be seen by examining how the symbol enters into relations with other symbols and with transducers. It is this position that Searle is quite specifically challenging. My own view is that although Searle is right in pointing out that some versions of the functionalist answer are in a certain sense incomplete, he is off the mark both in his diagnosis of where the problem lies and in his prognosis as to just how impoverished a view of mental functioning the cognitivist position will have to settle for (that is, his "weak AI").
- The sense in which a functionalist answer might be incomplete is if it failed to take the further step of specifying what it was about the system that warranted the ascription of one particular semantic content to the functional states (or to the symbolic expressions that express that state) rather than some other logically possible content. A cognitive theory claims that the system behaves in a certain way because certain expressions represent certain things (that is, have a certain semantic interpretation). It is, furthermore, essential that it do so: otherwise we would not be able to subsume certain classes of regular behaviors in a single generalization of the sort "the system does X because the state S represents such and such" (for example, the person ran out of the building because he believed that it was on fire). (For a discussion of this issue, see Pylyshyn 1980b.) But the particular interpretation appears to be extrinsic to the theory inasmuch as the system would behave in exactly the same way without the interpretation. Thus Searle concludes that it is only we, the theorists, who take the expression to represent, say, that the building is on fire. The system doesn't take it to represent anything because it literally doesn't know what the expression refers to: only we theorists do. That being the case, the system can't be said to behave in a certain way because of what it represents. This is in contrast with the way in which our behavior is determined: we do behave in certain ways because of what our thoughts are about. And that, according to Searle, adds up to weak AI; that is, a functionalist account in which formal analogues "stand in" for, but themselves neither have nor explain, mental contents.
- The last few steps, however, are non sequiturs. The fact that it was we, the theorists, who provided the interpretation of the expressions doesn't by itself mean that such an interpretation is simply a matter of convenience, or that there is a sense in which the interpretation is ours rather than the system's. Of course it's logically possible that the interpretation is only in the mind of the theorist and that the system behaves the way it does for entirely different reasons. But even if that happened to be true, it wouldn't follow simply from the fact that the AI theorist was the one who came up with the interpretation. Much depends on his reasons for coming up with that interpretation. In any case, the question of whether the semantic interpretation resides in the head of the programmer or in the machine is the wrong question to ask. A more relevant question would be: what fixes the semantic interpretation of functional states, or what latitude does the theorist have in assigning a semantic interpretation to the states of the system?
- When a computer is viewed as a self-contained device for processing formal symbols, we have a great deal of latitude in assigning semantic interpretations to states. Indeed, we routinely change our interpretation of the computer's functional states, sometimes viewing them as numbers, sometimes as alphabetic characters, sometimes as words or descriptions of a scene, and so on. Even where it is difficult to think of a coherent interpretation that is different from the one the programmer had in mind, such alternatives are always possible in principle. However, if we equip the machine with transducers and allow it to interact freely with both natural and linguistic environments, and if we endow it with the power to make (syntactically specified) inferences, it is anything but obvious what latitude, if any, the theorist (who knows how the transducers operate, and therefore knows what they respond to) would still have in assigning a coherent interpretation to the functional states in such a way as to capture psychologically relevant regularities in behavior.
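- A minimal sketch of that latitude (the byte values and the three readings below are arbitrary, chosen only for illustration): one and the same formal state can be read as a number, as text, or as coordinates in a scene description, and nothing in the state itself decides among these interpretations.

```python
import struct

# One "functional state" of the machine: four raw bytes.
state = bytes([0x48, 0x69, 0x21, 0x00])

# Interpretation 1: a 32-bit little-endian integer.
as_int = struct.unpack("<I", state)[0]      # 2189640

# Interpretation 2: a string of ASCII characters.
as_text = state.decode("ascii")             # 'Hi!\x00'

# Interpretation 3: two 16-bit coordinates of a point in a scene.
x, y = struct.unpack("<HH", state)          # (26952, 33)

print(as_int, repr(as_text), (x, y))
```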
- The role of intuitions. Suppose such connections between the system and the world as mentioned above (and possibly other considerations that no one has yet considered) uniquely constrained the possible interpretations that could be placed on representational states. Would this solve the problem of justifying the ascription of particular semantic contents to these states? Here I suspect that one would run into differences of opinion that may well be unresolvable, simply because they are grounded on different intuitions. For example there immediately arises the question of whether we possess a privileged interpretation of our own thoughts that must take precedence over such functional analyses. And if so, then there is the further question of whether being conscious is what provides the privileged access; and hence the question of what one is to do about the apparent necessity of positing unconscious mental processes. So far as I can see the only thing that recommends that particular view is the intuition that, whatever may be true of other creatures, I at least know what my thoughts refer to because I have direct experiential access to the referents of my thoughts. Even if we did have strong intuitions about such cases, there is good reason to believe that such intuitions should be considered as no more than secondary sources of constraint, whose validity should be judged by how well theoretical systems based on them perform. We cannot take as sacred anyone's intuitions about such things as whether another creature has intentionality - especially when such intuitions rest (as Searle's do, by his own admission) on knowing what the creature (or machine) is made of (for instance, Searle is prepared to admit that other creatures might have intentionality if "we can see that the beasts are made of similar stuff to ourselves"). Clearly, intuitions based on nothing but such anthropocentric chauvinism cannot form the foundation of a science of cognition [See "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978].
- A major problem in science - especially in a developing science like cognitive psychology - is to decide what sorts of phenomena "go together," in the sense that they will admit of a uniform set of explanatory principles. Information-processing theories have achieved some success in accounting for aspects of problem solving, language processing, perception, and so on, by deliberately glossing over the conscious-unconscious distinction; by grouping under common principles a wide range of rule-governed processes necessary to account for functioning, independent of whether or not people are aware of them. These theories have also placed to one side questions as to what constitutes consciously experienced qualia or "raw feels", dealing only with some of their reliable functional correlates (such as the belief that one is in pain, as opposed to the experience of the pain itself) - and they have to a large extent deliberately avoided the question of what gives symbols their semantics. Because AI has chosen to carve up phenomena in this way, people like Searle are led to conclude that what is being done is weak AI - or the modelling of the abstract functional structure of the brain without regard for what its states represent. Yet there is no reason to think that this program does not in fact lead to strong AI in the end. There is no reason to doubt that at asymptote (for example, when and if a robot is built) the ascription of intentionality to programmed machines will be just as warranted as its ascription to people, and for reasons that have absolutely nothing to do with the issue of consciousness.
- What is frequently neglected in discussions of intentionality is that we cannot state with any degree of precision what it is that entitles us to claim that people refer (though there are one or two general ideas, such as those discussed above), and therefore that arguments against the intentionality of computers typically reduce to "argument from ignorance." If we knew what it was that warranted our saying that people refer, we might also be in a position to claim that the ascription of semantic content to formal computational expressions - though it is in fact accomplished in practice by "inference to the best explanation" - was in the end warranted in exactly the same way. Humility, if nothing else, should prompt us to admit that there's a lot we don't know about how we ought to describe the capacities of future robots and other computing machines, even when we do know how their electronic circuits operate.