- Searle is in a bind. He denies that any Turing test for intelligence is adequate - that is, that behaving intelligently is a sufficient condition for being intelligent. But he dare not deny that creatures physiologically very different from people might be intelligent nonetheless - smart green saucer pilots, say. So he needs an intermediate criterion: not so specific to us as to rule out the aliens, yet not so dissociated from specifics as to admit any old object with the right behavior. His suggestion is that only objects (made of stuff) with "the right causal powers" can have intentionality, and hence, only such objects can genuinely understand anything or be intelligent. This suggestion, however, is incompatible with the main argument of his paper.
- Ostensibly, that argument is against the claim that working according to a certain program can ever be sufficient for understanding anything - no matter how cleverly the program is contrived so as to make the relevant object (computer, robot, or whatever) behave as if it understood. The crucial move is replacing the central processor (c.p.u.) with a superfast person - whom we might as well call "Searle's demon." And Searle argues that an English-speaking demon could perfectly well follow a program for simulating a Chinese speaker, without itself understanding a word of Chinese.
- The trouble is that the same strategy will work as well against any specification of "the right causal powers." Instead of manipulating formal tokens according to the specifications of some computer program, the demon will manipulate physical states or variables according to the specification of the "right" causal interactions. Just to be concrete, imagine that the right ones are those powers that our neuron tips have to titillate one another with neurotransmitters. The green aliens can be intelligent, even though they're based on silicon chemistry, because their (silicon) neurons have the same power of intertitillation. Now imagine covering each of the neurons of a Chinese criminal with a thin coating, which has no effect, except that it is impervious to neurotransmitters. And imagine further that Searle's demon can see the problem, and comes to the rescue; he peers through the coating at each neural tip, determines which transmitter (if any) would have been emitted, and then massages the adjacent tips in a way that has the same effect as if they had received that transmitter. Basically, instead of replacing the c.p.u., the demon is replacing the neurotransmitters.
- By hypothesis, the victim's behavior is unchanged; in particular, she still acts as if she understood Chinese. Now, however, none of her neurons has the right causal powers - the demon has them, and he still understands only English. Therefore, having the right causal powers (even while embedded in a system such that the exercise of these powers leads to "intelligent" behavior) cannot be sufficient for understanding. Needless to say, a corresponding variation will work, whatever the relevant causal powers are.
- None of this should come as a surprise. A computer program just is a specification of the exercise of certain causal powers: the powers to manipulate various formal tokens (physical objects or states of some sort) in certain specified ways, depending on the presence of certain other such tokens. Of course, it is a particular way of specifying causal exercises of a particular sort - that's what gives the "computational paradigm" its distinctive character. But Searle makes no use of this particularity; his argument depends only on the fact that causal powers can be specified independently of whatever it is that has the power. This is precisely what makes it possible to interpose the demon, in both the token-interaction (program) and neuron-interaction cases.
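- To make that point concrete, here is a minimal sketch (a toy illustration of my own, not anything in Searle's paper) of how a program specifies the manipulation of formal tokens while saying nothing about what physically carries the manipulation out; the rule table and its contents are invented for the example.

```python
# A toy "program": a specification of how formal tokens are to be manipulated,
# stated without any reference to what realizes the tokens or executes the
# rules (a c.p.u., a superfast demon, or anything else that follows the table).

RULES = {
    # (current state, input token) -> (next state, output token); invented contents
    ("start", "ni hao"): ("greeted", "ni hao"),
    ("greeted", "zai jian"): ("start", "zai jian"),
}

def step(state, token, rules=RULES):
    """Apply whichever rule matches; who or what applies it is left open."""
    return rules.get((state, token), (state, "..."))

# Any realization that respects these transitions counts as running the program.
print(step("start", "ni hao"))  # ('greeted', 'ni hao')
```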
- There is no escape in urging that this is a "dualistic" view of causal powers, not intrinsically connected with "the actual properties" of physical objects. To speak of causal powers in any way that allows for generalization (to green aliens, for example) is ipso facto to abstract from the particulars of any given "realization." The point is independent of the example - it works just as well for photosynthesis. Thus, flesh-colored plant-like organisms on the alien planet might photosynthesize (I take it, in a full and literal sense) so long as they contain some chemical (not necessarily chlorophyll) that absorbs light and uses the energy to make sugar and free oxygen out of carbon dioxide (or silicon dioxide?) and water. This is what it means to specify photosynthesis as a causal power, rather than just a property that is, by definition, idiosyncratic to chlorophyll. But now, of course, the demon can enter, replacing both chlorophyll and its alien substitute: he devours photons, and thus energized, makes sugar from CO2 and H2O. It seems to me that the demon is photosynthesizing.
- Let's set aside the demon argument, however. Searle also suggests that "there is no reason to suppose" that understanding (or intentionality) "has anything to do with" computer programs. This too, I think, rests on his failure to recognize that specifying a program is (in a distinctive way) specifying a range of causal powers and interactions.
- The central issue is what differentiates original intentionality from derivative intentionality. The former is intentionality that a thing (system, state, process) has "in its own right"; the latter is intentionality that is "borrowed from" or "conferred by" something else. Thus (on standard assumptions, which I will not question here), the intentionality of conscious thought and perception is original, whereas the intentionality (meaning) of linguistic tokens is merely conferred upon them by language users - that is, words don't have any meaning in and of themselves, but only in virtue of our giving them some. These are paradigm cases; many other cases will fall clearly on one side or the other, or be questionable, or perhaps even marginal. No one denies that if AI systems don't have original intentionality, then they at least have derivative intentionality, in a nontrivial sense - because they have nontrivial interpretations. What Searle objects to is the thesis, held by many, that good-enough AI systems have (or will eventually have) original intentionality.
- Thought tokens, such as articulate beliefs and desires, and linguistic tokens, such as the expressions of articulate beliefs and desires, seem to have a lot in common - as pointed out, for example, by Searle (1979c). In particular, except for the original/derivative distinction, they have (or at least appear to have) closely parallel semantic structures and variations. There must be some other principled distinction between them, then, in virtue of which the former can be originally intentional, but the latter only derivatively so. A conspicuous candidate for this distinction is that thoughts are semantically active, whereas sentence tokens, written out, say, on a page, are semantically inert. Thoughts are constantly interacting with one another and the world, in ways that are semantically appropriate to their intentional content. The causal interactions of written sentence tokens, on the other hand, do not consistently reflect their content (except when they interact with people).
- Thoughts are embodied in a "system" that provides "normal channels" for them to interact with the world, and such that these normal interactions tend to maximize the "fit" between them and the world; that is, via perception, beliefs tend toward the truth; and, via action, the world tends toward what is desired. And there are channels of interaction among thoughts (various kinds of inference) via which the set of them tends to become more coherent, and to contain more consequences of its members. Naturally, other effects introduce aberrations and "noise" into the system; but the normal channels tend to predominate in the long run. There are no comparable channels of interaction for written tokens. In fact (according to this same standard view), the only semantically sensitive interactions that written tokens ever have are with thoughts; insofar as they tend to express truths, it is because they express beliefs, and insofar as they tend to bring about their own satisfaction conditions, it is because they tend to bring about desires. Thus, the only semantically significant interactions that written tokens have with the world are via thoughts; and this, the suggestion goes, is why their intentionality is derivative.
- The interactions that thoughts have among themselves (within a single "system") are particularly important, for it is in virtue of these that thought can be subtle and indirect, relative to its interactions with the world - that is, not easily fooled or thwarted. Thus, we tend to consider more than the immediately present evidence in making judgments, and more than the immediately present options in making plans. We weigh desiderata, seek further information, try things to see if they'll work, formulate general maxims and laws, estimate results and costs, go to the library, cooperate, manipulate, scheme, test, and reflect on what we're doing. All of these either are or involve a lot of thought-thought interaction, and tend, in the long run, to broaden and improve the "fit" between thought and world. And they are typical as manifestations both of intelligence and of independence.
- I take it for granted that all of the interactions mentioned are, in some sense, causal - hence, that it is among the system's "causal powers" that it can have (instantiate, realize, produce) thoughts that interact with the world and each other in these ways. It is hard to tell whether these are the sorts of causal powers that Searle has in mind, both because he doesn't say, and because they don't seem terribly similar to photosynthesis and lactation. But, in any case, they strike me as strong candidates for the kinds of powers that would distinguish systems with intentionality - that is, original intentionality - from those without. The reason is that these are the only powers that consistently reflect the distinctively intentional character of the interactors: namely, their "content" or "meaning" (except, so to speak, passively, as in the case of written tokens being read). That is, the power to have states that are semantically active is the "right" causal power for intentionality.
- It is this plausible claim that underlies the thesis that (sufficiently developed) AI systems could actually be intelligent, and have original intentionality. For a case can surely be made that their "representations" are semantically active (or, at least, that they would be if the system were built into a robot). Remember, we are conceding them at least derivative intentionality, so the states in question do have a content, relative to which we can gauge the "semantic appropriateness" of their causal interactions. And the central discovery of all computer technology is that devices can be contrived such that, relative to a certain interpretation, certain of their states will always interact (causally) in semantically appropriate ways, so long as the devices perform as designed electromechanically - that is, these states can have "normal channels" of interaction (with each other and with the world) more or less comparable to those that underlie the semantic activity of thoughts. This point can hardly be denied, so long as it is made in terms of the derivative intentionality of computing systems; but what it seems to add to the archetypical (and "inert") derivative intentionality of, say, written text is, precisely, semantic activity. So, if (sufficiently rich) semantic activity is what distinguishes original from derivative intentionality (in other words, it's the "right" causal power), then it seems that (sufficiently rich) computing systems can have original intentionality.
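- As a concrete illustration of that "central discovery" (again a toy example of my own, with invented names), consider a device whose bit-pattern states, interpreted as numbers, always interact in semantically appropriate ways: the formal rules for combining the tokens never falsify the interpretation.

```python
# Illustrative sketch: states that are semantically active relative to an
# interpretation. The bit strings are mere formal tokens; the rule table below
# manipulates them digit by digit, yet, interpreted as numbers, the result is
# always the sum of the inputs. Names and rules are invented for the example.

def interpret(bits: str) -> int:
    """The interpretation: read a bit pattern as a nonnegative integer."""
    return int(bits, 2)

def combine(a: str, b: str) -> str:
    """The causal story: a purely formal, token-by-token manipulation."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    table = {  # (bit, bit, carry) -> (sum bit, new carry)
        ("0", "0", "0"): ("0", "0"), ("0", "0", "1"): ("1", "0"),
        ("0", "1", "0"): ("1", "0"), ("0", "1", "1"): ("0", "1"),
        ("1", "0", "0"): ("1", "0"), ("1", "0", "1"): ("0", "1"),
        ("1", "1", "0"): ("0", "1"), ("1", "1", "1"): ("1", "1"),
    }
    carry, out = "0", ""
    for x, y in zip(reversed(a), reversed(b)):
        s, carry = table[(x, y, carry)]
        out = s + out
    return carry + out if carry == "1" else out

a, b = "0110", "0011"
result = combine(a, b)
# Semantic appropriateness: the interpretation of the output is always the
# sum of the interpretations of the inputs.
assert interpret(result) == interpret(a) + interpret(b)
print(interpret(a), "+", interpret(b), "=", interpret(result))  # 6 + 3 = 9
```

The manipulation is defined entirely over the tokens; it is only relative to the interpretation that its unfailing appropriateness amounts to a kind of semantic activity.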
- Now, like Searle, I am inclined to dispute this conclusion; but for entirely different reasons. I don't believe there is any conceptual confusion in supposing that the right causal powers for original intentionality are the ones that would be captured by specifying a program (that is, a virtual machine). Hence, I don't think the above plausibility argument can be dismissed out of hand ("no reason to suppose," and so on); nor can I imagine being convinced that, no matter how good AI got, it would still be "weak" - that is, would not have created a "real" intelligence - because it still proceeded by specifying programs. It seems to me that the interesting question is much more nitty-gritty empirical than that: given that programs might be the right way to express the relevant causal structure, are they in fact so? It is to this question that I expect the answer is no. In other words, I don't much care about Searle's demon working through a program for perfect simulation of a native Chinese speaker - not because there's no such demon, but because there's no such program. Or rather, whether there is such a program, and if not, why not, are, in my view, the important questions.