- Searle identifies a weakness in AI methodology that is certainly worth investigating. He points out that by focusing attention at such a high level of cognitive analysis, AI ignores the foundational role that physical properties play in the determination of intentionality. The case may be stated thus: in human beings, the processing of cognized features of the world involves direct physical activity of neural structures and substructures as well as causal interactions between the nervous system and external physical phenomena. When we stipulate a "program" as an explanation (or, minimally, a description) of a cognitive process, we abstract information-processing-type elements at some arbitrary level of resolution, and we presuppose the constraints and contributions made at lower levels (for example, physical instantiation). AI goes wrong, according to Searle, by forgetting the force of this presupposition and thereby assuming that the computer implementation of the stipulated program will, by itself, display the intentional properties of the original human phenomenon.
- AI doctrine, of course, holds that the lower-level properties are irrelevant to the character of higher-level cognitive processes, thus following the grand old tradition inaugurated by Turing (1964) and Putnam (1960).
- If this is in fact the crux of the dispute between Searle and AI, then it is of relatively small philosophical interest. For it amounts to saying nothing more than that there may be important information processes occurring at the intra-neuronal and sub-neuronal levels - a question that can only be decided empirically. If it turns out that such processes do not exist, then current approaches in AI are vindicated; if, on the other hand, Searle's contention is correct, then AI must accommodate lower level processes in its cognitive models. Pragmatically, the simulation of sub-neuronal processes on a scale large enough to be experimentally significant might prove to be impossible (at least with currently envisioned technology). This is all too likely and would, if proven to be the case, spell methodological doom for AI as we now know it. Nevertheless, this would have little philosophical import since the inability to model the interface between complex sub-neural and inter-neuronal processes adequately would constitute a technical, not a theoretical, failure.
- But Searle wants much more than this. He bases his denial of the adequacy of AI models on the belief that the physical properties of neuronal systems are such that they cannot in principle be simulated by a non-protoplasmic computer system. This is where Searle takes refuge in what can only be termed mysticism.
- Searle refers to the privileged properties of protoplasmic neuronal systems as "causal powers." I can discern at least two plausible interpretations of this term, but neither will satisfy Searle's argument. The first reading of "causal power" pertains to the direct linkage of the nervous system to physical phenomena of the external world. For example, when a human being processes visual images, the richness of the internal information results from direct physical interaction with the world. When a computer processes a scene, there need be no actual link between light phenomena in the world and an internal "representation" in the machine. Because the internal "representation" is the result of some stipulated program, one could (and often does in AI) input the "representation" by hand, that is, without any physical, visual apparatus. In such a case, the causal link between states of the world and internal states of the machine is merely stipulated. Going one step further, we can argue that without such a causal link, the internal states cannot be viewed as cognitive states since they lack any programmer-independent semantic content. AI workers might try to remedy the situation by introducing appropriate sensory transducers and effector mechanisms (such as "hands") into their systems, but I suspect that Searle could press his point further by arguing that the causal powers of such a system would still fail to mirror the precise causal powers of the human nervous system. The suppressed premise that Searle is trading on, however, is that nothing but a system that shared the physical properties of our own nervous systems would display precisely the same sort of causal links.
- Yet if the causality with which Searle is concerned involves nothing more than direct connectivity between internal processes and sensorimotor states, it would seem that he is really talking about functional properties, not physical ones. He cannot make his case that a photo-electric cell is incapable of capturing the same sort of information as an organic rod or cone in a human retina unless he can specifically identify a (principled) deficiency of the former with respect to the latter. And this he does not do. We may sum up by saying that "causal powers," in this interpretation, does presuppose embodiment but that no particular physical makeup for a body is demanded. Connecting actual sensorimotor mechanisms to a perceptron-like internal processor should, therefore, satisfy causality requirements of this sort (by removing the stipulational character of the internal states).
- Under the second interpretation, the term "causal powers" refers to the capacities of protoplasmic neurons to produce phenomenal states, such as felt sensations, pains, and the like. Here, Searle argues that things like automobiles and typewriters, because of their inorganic physical composition, are categorically incapable of causing felt sensations, and that this aspect of consciousness is crucial to intentionality.
- There are two responses to this claim. First, arguing with Dennett, Schank, and others, we might say that Searle is mistaken in his view that intentionality necessarily requires felt sensations, and that in fact the functional components of sensations are all that a cognitive model requires. But even if we accept Searle's account of intentionality, the claim still seems untenable. The mere fact that mental phenomena such as felt sensations have, historically speaking, been confined to protoplasmic organisms in no way demonstrates that such phenomena could not arise in a non-protoplasmic system. Such an assertion is on a par with the claim (made in antiquity) that only organic creatures such as birds or insects could fly. Searle explicitly and repeatedly announces that intentionality "is a biological phenomenon," but he never explains what sort of biological phenomenon it is, nor does he ever give us a reason to believe that there is a property or set of properties inherent in protoplasmic neural matter that could not, in principle, be replicated in an alternative physical substrate.
- One can only conclude that knowledge of the necessary connection between intentionality and protoplasmic embodiment is obtained through some sort of mystical revelation. This, of course, shouldn't be too troublesome to AI researchers who, after all, trade on mysticism as much as anyone in cognitive science does these days. And so it goes.