- Most versions of philosophical behaviorism have had the consequence that if an organism or device D passes the Turing test, in the sense of systematically manifesting all the same outward behavioral dispositions that a normal human does, then D has all the same sorts of contentful or intentional states that humans do. In light of fairly obvious counterexamples to this thesis, materialist [1] philosophers of mind have by and large rejected behaviorism in favor of a more species-chauvinistic view: D's manifesting all the same sorts of behavioral dispositions we do does not alone suffice for D's having intentional states; it is necessary in addition that D produce behavior from stimuli in roughly the way that we do - that D's inner functional organization be not unlike ours and that D process the stimulus input by analogous inner procedures. On this "functionalist" theory, to be in a mental state of such and such a kind is to incorporate a functional component or system of components of type so and so which is in a certain distinctive state of its own. "Functional components" are individuated according to the roles they play within their owners' overall functional organization [2].
- Searle offers a number of cases of entities that manifest the behavioral dispositions we associate with intentional states but that rather plainly do not have any such states [3]. I accept his intuitive judgments about most of these cases. Searle plus rule book plus pencil and paper presumably does not understand Chinese, nor does Searle with memorized rule book or Searle with TV camera or the robot with Searle inside. Neither my stomach nor Searle's liver nor a thermostat nor a light switch has beliefs and desires. But none of these cases is a counterexample to the functionalist hypothesis. The systems in the former group are pretty obviously not functionally isomorphic at the relevant level to human beings who do understand Chinese; a native Chinese speaker carrying on a conversation is implementing procedures of his own, not those procedures that would occur in a mockup containing the cynical, English-speaking, American-acculturated homuncular Searle. Therefore they are not counterexamples to a functionalist theory of language understanding, and accordingly they leave it open that a computer that was functionally isomorphic to a real Chinese speaker would indeed understand Chinese also. Stomachs, thermostats, and the like, because of their brutish simplicity, are even more clearly dissimilar to humans. (The same presumably is true of Schank's existing language-understanding programs.)
- I have hopes for a sophisticated version of the "brain simulator" (or the "combination" machine) that Searle illustrates with his plumbing example. Imagine a hydraulic system of this type that does replicate [4], perhaps not the precise neuroanatomy of a Chinese speaker, but all that is relevant of the Chinese speaker's higher functional organization; individual water pipes are grouped into organ systems precisely analogous to those found in the speaker's brain, and the device processes linguistic input in just the way that the speaker does. (It does not merely simulate or describe this processing.) Moreover, the system is automatic and does all this without the intervention of Searle or any other deus in machina. Under these conditions and given a suitable social context, I think it would be plausible to accept the functionalist consequence that the hydraulic system does understand Chinese.
- Searle's paper suggests two objections to this claim. First, "where is the understanding in this system?" All Searle sees is pipes and valves and flowing water. Reply: Looking around at the fine detail of the system's hardware, you are too small to see that the system is understanding Chinese sentences. If you were a tiny, cell-sized observer inside a real Chinese speaker's brain, all you would see would be neurons stupidly, mechanically transmitting electrical charge, and in the same tone you would ask, "Where is the understanding in this system?" But you would be wrong in concluding that the system you were observing did not understand Chinese; in like manner you may well be wrong about the hydraulic device [5].
- Second, even if a computer were to replicate [6] all of the Chinese speaker's relevant functional organization, all the computer is really doing is performing computational operations on formally specified elements. A purely formally or syntactically characterized element has no meaning or content in itself, obviously, and no amount of mindless syntactic manipulation of it will endow it with any. Reply: The premise is correct, and I agree it shows that no computer has or could have intentional states merely in virtue of performing syntactic operations on formally characterized elements. But that does not suffice to prove that no computer can have intentional states at all. Our brain states do not have the contents they do just in virtue of having their purely formal properties either [7]; a brain state described "syntactically" has no meaning or content on its own. In virtue of what, then, do brain states (or mental states however construed) have the meanings they do? Recent theory advises that the content of a mental representation is not determined within its owner's head (Putnam 1975a; Fodor 1980); rather, it is determined in part by the objects in the environment that actually figure in the representation's etiology and in part by social and contextual factors of several other sorts (Stich, in preparation). Now, present-day computers live in highly artificial and stifling environments. They receive carefully and tendentiously preselected input; their software is adventitiously manipulated by uncaring programmers; and they are isolated in laboratories and offices, deprived of any normal interaction within a natural or appropriate social setting [8]. For this reason and several others, Searle is surely right in saying that present-day computers do not really have the intentional states that we fancifully incline toward attributing to them.
But nothing Searle has said impugns the thesis that if a sophisticated future computer not only replicated [9] human functional organization but harbored its inner representations as a result of the right sort of causal history and had also been nurtured within a favorable social setting, we might correctly ascribe intentional states to it. This point may or may not afford lasting comfort to the AI community.
- [2] This characterization is necessarily crude and vague. For a very useful survey of different versions of functionalism and their respective foibles, see Block (1978); I have developed and defended what I think is the most promising version of functionalism in Lycan (forthcoming).
- [3] For further discussion of cases of this kind, see Block (forthcoming).
- [5] A much expanded version of this reply appears in section 4 of Lycan (forthcoming).
- [7] I do not understand Searle's positive suggestion as to the source of intentionality in our own brains. What "neurobiological causal properties"?
- [8] As Fodor (forthcoming) remarks, SHRDLU as we interpret him is the victim of a Cartesian evil demon; the "blocks" he manipulates do not exist in reality.