- Searle's argument depends for its force on intuitions that certain entities do not think. There are two simple objections to his argument that are based on general considerations about what can be shown by intuitions that something can't think.
- First, we are willing, and rightly so, to accept counterintuitive consequences of claims for which we have substantial evidence. It once seemed intuitively absurd to assert that the earth was whirling through space at breakneck speed, but in the face of the evidence for the Copernican view, such an intuition should be (and eventually was) rejected as irrelevant to the truth of the matter. More relevantly, a grapefruit-sized head-enclosed blob of gray protoplasm seems, at least at first blush, a most implausible seat of mentality. But if your intuitions still balk at brains as seats of mentality, you should ignore your intuitions as irrelevant to the truth of the matter, given the remarkable evidence for the role of the brain in our mental life. Searle presents some alleged counterintuitive consequences of the view of cognition as formal symbol manipulation. But his argument does not even have the right form, for in order to know whether we should reject the doctrine because of its alleged counterintuitive consequences, we must know what sort of evidence there is in favor of the doctrine. If the evidence for the doctrine is overwhelming, then incompatible intuitions should be ignored, just as should intuitions that the brain couldn't be the seat of mentality. So Searle's argument has a missing premise to the effect that the evidence isn't sufficient to overrule the intuitions.
- Well, is such a missing premise true? I think that anyone who takes a good undergraduate cognitive psychology course would see enough evidence to justify tentatively disregarding intuitions of the sort that Searle appeals to. Many theories in the tradition of thinking as formal symbol manipulation have a moderate (though admittedly not overwhelming) degree of empirical support.
- A second point against Searle has to do with another aspect of the logic of appeals to intuition. At best, intuition reveals facts about our concepts (at worst, facts about a motley of factors such as our prejudices, ignorance, and, still worse, our lack of imagination - as when people accepted the deliverance of intuition that two straight lines cannot cross twice). So even if we were to accept Searle's appeal to intuitions as showing that homunculus heads that formally manipulate symbols do not think, what this would show is that our formal symbol-manipulation theories do not provide a sufficient condition for the application of our ordinary intentional concepts. The more interesting issue, however, is whether the homunculus head's formal symbol manipulation falls in the same scientific natural kind (see Putnam 1975a) as our intentional processes. If so, then the homunculus head does think in a reasonable scientific sense of the term - and so much the worse for the ordinary concept. Moreover, if we are very concerned with ordinary intentional concepts, we can give sufficient conditions for their application by building in ad hoc conditions designed to rule out the putative counterexamples. A first stab (inadequate, but improvable - see Putnam 1975b, p. 435; Block 1978, p. 292) would be to add the condition that in order to think, realizations of the symbol-manipulating system must not have operations mediated by entities that themselves have symbol manipulation typical of intentional systems. The ad hocness of such a condition is not an objection to it, given that what we are trying to do is "reconstruct" an everyday concept out of a scientific one; we can expect the everyday concept to be scientifically characterizable only in an unnatural way. (See Fodor's commentary on Searle, this issue.) Finally, there is good reason for thinking that the Putnam-Kripke account of the semantics¹ of "thought" and other intentional terms is correct. If so, and if the formal symbol manipulation of the homunculus head falls in the same natural kind as our cognitive processes, then the homunculus head does think, in the ordinary sense as well as in the scientific sense of the term.
- The upshot of both these points is that the real crux of the debate rests on a matter that Searle does not so much as mention: what the evidence is for the formal symbol-manipulation point of view.
- Recall that Searle's target is the doctrine that cognition is formal symbol manipulation, that is, manipulation of representations by mechanisms that take account only of the forms (shapes) of the representations. Formal symbol-manipulation theories of cognition postulate a variety of mechanisms that generate, transform, and compare representations. Once one sees this doctrine as Searle's real target, one can simply ignore his objections to Schank. The idea that a machine programmed à la Schank has anything akin to mentality is not worth taking seriously, and casts as much doubt on the symbol-manipulation theory of thought as Hitler casts on doctrines favoring a strong executive branch of government. Any plausibility attaching to the idea that a Schank machine thinks would seem to derive from a crude Turing test version of behaviorism² that is anathema to most who view cognition as formal symbol manipulation.
- Consider a robot akin to the one sketched in Searle's reply II (omitting features that have to do with his criticism of Schank). It simulates your input-output behavior by using a formal symbol-manipulation theory of your cognitive processes, of the sort just sketched (together with a theory of your noncognitive mental processes, a qualification omitted from now on). Its body is like yours except that instead of a brain it has a computer equipped with a cognitive theory true of you. You receive an input: "Who is your favorite philosopher?" You cogitate a bit and reply "Heraclitus." If your robot doppelganger receives the same input, a mechanism converts the input into a description of the input. The computer uses its description of your cognitive mechanisms to deduce a description of the product of your cogitation. This description is then transmitted to a device that transforms the description into the noise "Heraclitus."
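- To fix ideas, the doppelganger's trick can be sketched in present-day programming terms. This is only an illustrative toy, not anyone's actual program: the lookup table stands in for the ideally complete cognitive theory true of you, and all the names here are hypothetical.

```python
# A toy sketch of the doppelganger just described: it answers as you would,
# not by cogitating, but by deducing your reply from a stored description of
# your cognitive processes. The lookup table is a hypothetical stand-in for
# an ideally complete cognitive theory true of you.

DESCRIPTION_OF_YOUR_COGNITION = {
    "Who is your favorite philosopher?": "Heraclitus",
}

def doppelganger_reply(stimulus: str) -> str:
    described_input = stimulus                                         # a mechanism describes the input
    described_output = DESCRIPTION_OF_YOUR_COGNITION[described_input]  # deduce a description of your reply
    return described_output                                            # a device turns the description into noise

print(doppelganger_reply("Who is your favorite philosopher?"))  # prints "Heraclitus"
```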
- While the robot just described behaves just as you would given any input, it is not obvious that it has any mental states. You cogitate in response to the question, but what goes on in the robot is manipulation of descriptions of your cogitation so as to produce the same response. It isn't obvious that the manipulation of descriptions of cogitation in this way is itself cogitation.
- My intuitions agree with Searle about this kind of case (see Block, forthcoming), but I have encountered little agreement on the matter. In the absence of widely shared intuition, I ask the reader to pretend to have Searle's and my intuition on this question. Now I ask another favor, one that should be firmly distinguished from the first: take the leap from intuition to fact (a leap that, as I argued in the first four paragraphs of this commentary, Searle gives us no reason to take). Suppose, for the sake of argument, that the robot described above does not in fact have intentional states.
- What I want to point out is that even if we grant Searle all this, the doctrine that cognition is formal symbol manipulation remains utterly unscathed. For it is no part of the symbol-manipulation view of cognition that this kind of manipulation of descriptions of our symbol-manipulating cognitive processes is itself a cognitive process. Those who believe formal symbol-manipulation theories of intentionality must assign intentionality to anything of which the theories are true, but the theories cannot be expected to be true of devices that use them to mimic beings of which they are true.
- Thus far, I have pointed out that intuitions that Searle's sort of homunculus head does not think do not challenge the doctrine that thinking is formal symbol manipulation. But a variant of Searle's example can be described that is similar to his in intuitive force yet avoids the criticism I just sketched.
- Recall that it is the aim of cognitive psychology to decompose mental processes into combinations of processes in which mechanisms generate representations, other mechanisms transform representations, and still other mechanisms compare representations, issuing reports to still other mechanisms, the whole network being appropriately connected to sensory input transducers and motor output devices. The goal of such theorizing is to decompose these processes to the point at which the mechanisms that carry out the operations have no internal goings on that are themselves decomposable into symbol manipulation by still further mechanisms. Such ultimate mechanisms are described as "primitive," and are often pictured in flow diagrams as "black boxes" whose realization is a matter of "hardware" and whose operation is to be explained by the physical sciences, not psychology. (See Fodor 1968; 1980; Dennett 1975.)
- Now consider an ideally completed theory along these lines, a theory of your cognitive mechanisms. Imagine a robot whose body is like yours, but whose head contains an army of homunculi, one for each black box. Each homunculus does the symbol-manipulating job of the black box he replaces, transmitting his "output" to other homunculi by telephone in accordance with the cognitive theory. This homunculi head is just a variant of one that Searle uses, and it completely avoids the criticism I sketched above, because the cognitive theory it implements is actually true of it. Call this robot the cognitive homunculi head. (The cognitive homunculi head is discussed in more detail in Block 1978, pp. 305-10.) I shall argue that even if you have the intuition that the cognitive homunculi head has no intentionality, you should not regard this intuition as casting doubt on the truth of symbol-manipulation theories of thought.
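- Purely for illustration again, the contrast with the doppelganger sketched earlier can be put in the same toy terms. In the sketch below, each function stands in for one primitive black box (one homunculus), so the system actually carries out the generate, transform, and compare operations rather than consulting a description of them; the particular decomposition and all the names are hypothetical, chosen only for vividness.

```python
# A toy sketch of the cognitive homunculi head: each function plays the role
# of one primitive black box (one homunculus). Because the system itself
# performs the symbol-manipulating operations, the (toy) theory it embodies
# is true of it, unlike the description-consulting doppelganger above.

def generate(stimulus: str) -> list[str]:
    """Homunculus 1: generate a representation of the incoming stimulus."""
    return stimulus.rstrip("?!.").lower().split()

def transform(representation: list[str]) -> str:
    """Homunculus 2: transform the representation into a retrieval cue."""
    return " ".join(representation)

def compare(cue: str, memory: dict[str, str]) -> str:
    """Homunculus 3: compare the cue with stored representations and report."""
    return memory.get(cue, "I have no opinion")

MEMORY = {"who is your favorite philosopher": "Heraclitus"}

def cognitive_homunculi_head(stimulus: str) -> str:
    # The "telephone" wiring: each homunculus passes its output to the next.
    return compare(transform(generate(stimulus)), MEMORY)

print(cognitive_homunculi_head("Who is your favorite philosopher?"))  # prints "Heraclitus"
```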
- One line of argument against the cognitive homunculi head is that its persuasive power may be due to a "not seeing the forest for the trees" illusion (see Lycan's commentary, this issue, and Lycan, forthcoming). Another point is that brute untutored intuition tends to balk at assigning intentionality to any physical system, including Searle's beloved brains. Does Searle really think that it is an initially congenial idea that a hunk of gray jelly is the seat of his intentionality? (Could one imagine a less likely candidate?) What makes gray jelly so intuitively satisfying to Searle is obviously his knowledge that brains are the seat of our intentionality. But here we see the difficulty in relying on considered intuitions, namely that they depend on our beliefs, and among the beliefs most likely to play a role in the case at hand are precisely our doctrines about whether the formal symbol-manipulation theory of thinking is true or false.
- Let me illustrate this and another point via another example (Block 1978, p. 291). Suppose there is a part of the universe that contains matter that is infinitely divisible. In that part of the universe, there are intelligent creatures much smaller than our elementary particles who decide to devote the next few hundred years to creating out of their matter substances with the chemical and physical characteristics (except at the sub-elementary particle level) of our elements. They build hordes of space ships of different varieties about the sizes of our electrons, protons, and other elementary particles, and fly the ships in such a way as to mimic the behavior of these elementary particles. The ships contain apparatus to produce and detect the type of radiation elementary particles give off. They do this to produce huge (by our standards) masses of substances with the chemical and physical characteristics of oxygen, carbon, and other elements. You go off on an expedition to that part of the universe, and discover the "oxygen" and "carbon." Unaware of its real nature, you set up a colony, using these "elements" to grow plants for food, provide "air" to breathe, and so on. Since one's molecules are constantly being exchanged with the environment, you and other colonizers come to be composed mainly of the "matter" made of the tiny people in space ships.
- If any intuitions about homunculi heads are clear, it is clear that coming to be made of the homunculi-infested matter would not affect your mentality. Thus we see that intuition need not balk at assigning intentionality to a being whose intentionality owes crucially to the actions of internal homunculi. Why is it so obvious that coming to be made of homunculi-infested matter would not affect our sapience or sentience? I submit that it is because we have all absorbed enough neurophysiology to know that changes in particles in the brain that do not affect the brain's basic (electrochemical) mechanisms do not affect mentality.
- Our intuitions about the mentality of homunculi heads are obviously influenced (if not determined) by what we believe. If so, then the burden of proof lies with Searle to show that the intuition that the cognitive homunculi head has no intentionality (an intuition that I and many others do not share) is not due to doctrine hostile to the symbol-manipulation account of intentionality.
- In sum, an argument such as Searle's requires a careful examination of the source of the intuition that the argument depends on, an examination Searle does not begin.
- Acknowledgment: I am grateful to Jerry Fodor and Georges Rey for comments on an earlier draft.
- ² While the crude version of behaviorism is refuted by well-known arguments, there is a more sophisticated version that avoids them; however, it can be refuted using an example akin to the one Searle uses against Schank. Such an example is sketched in Block 1978, p. 294, and elaborated in Block, forthcoming.