- It is a rare and pleasant privilege to comment on an article that surely is destined to become, almost immediately, a classic. But, alas, what comments are called for? Following BBS instructions, I'll resist the very strong temptation to explain how Searle makes exactly the right central points and supports them with exactly the right arguments; and I shall leave it to those who, for one reason or another, still disagree with his central contentions to call attention to a few possible weaknesses, perhaps even a mistake or two, in the treatment of some of his ancillary claims. What I shall try to do is to examine, briefly - and therefore sketchily and inadequately - what seem to me to be some implications of his results for the overall mind-body problem.
- Quite prudently, in view of the brevity of his paper, Searle leaves some central issues concerning mind-brain relations virtually untouched. In particular, his main thrust seems compatible with interactionism, with epiphenomenalism, and with at least some versions of the identity thesis. It does count, very heavily, against eliminative materialism, and, equally importantly, it reveals "functionalism" (or "functional materialism") as it is commonly held and interpreted (by, for example, Hilary Putnam and David Lewis) to be just another variety of eliminative materialism (protestations to the contrary notwithstanding). Searle correctly notes that functionalism of this kind (and strong AI, in general) is a kind of dualism. But it is not a mental-physical dualism; it is a form-content dualism, one, moreover, in which the form is the thing and content doesn't matter! (See Fodor: "Methodological Solipsism," BBS 3(1) 1980.)
- Now I must admit that in order to find these implications in Searle's results I have read into them a little more than they contain explicitly. Specifically, I have assumed that intentional states are genuinely mental in the what-is-it-like-to-be-a-bat? sense of "mental" (Nagel 1974) as well as, what I suppose is obvious, that eliminative materialism seeks to "eliminate" the genuinely mental in this sense. But it seems to me that it does not take much reading between the lines to see that Searle is sympathetic to my assumptions. For example, he does speak of "genuinely mental [systems]," and he says (in Searle 1979c) that he believes that "only beings capable of conscious states are capable of Intentional states" (my italics), although he says that he does not know how to demonstrate this. (How, indeed, could anyone demonstrate such a thing? How could one demonstrate that fire burns?)
- The argument that Searle gives for the conclusion that only machines can think (can have intentional states) appears to have two suppressed premises: (1) intentional states must always be causally produced, and (2) any causal network (with a certain amount of organization and completeness, or some such condition) is a machine. I accept for the purposes of this commentary his premises and his conclusion. Next I want to ask: what kind of hardware must a thinking machine incorporate? (By "thinking machine" I mean of course a machine that has genuinely mental thoughts: such a machine, I contend, will also have genuinely mental states or events instantiating sensations, emotions, and so on in all of their subjective, qualitative, conscious, experiential richness.) To continue this line of investigation, I want to employ an "event ontology," discarding substance metaphysics altogether. (Maxwell 1978 provides a sketch of some of the details and of the contention that contemporary physics, quite independently of philosophy of mind, leads to such an ontology.) An event is (something like) the instancing of a property or the instancing (concrete realization) of a state. A causal network, then, consists entirely of a group of events and the causal links that interconnect them. A fortiori, our "machine" will consist entirely of events and causal connections. In other words, the hardware of this machine (or of any machine, for example, a refrigerator) consists of its constituent events, and the machine consists of nothing else (except the causal linkages). Our thinking machine in the only form we know it today is always a brain (or, if you prefer, an entire human or other animal body), which, as we have explained, is just a certain causal network of events. The mind-brain identity theory in the version that I defend says that some of the events in this network are (nothing but) genuinely mental events (instances of intentional states, or of pains, or the like). Epiphenomenalism says that the mental events "dangle" from the main net (the brain) by causal connections which are always one way (from brain to dangler). (Epiphenomenalism is, I believe, obviously, though contingently, false.) Interactionism says that there are causal connections in both directions but that the mental events are somehow in a different realm from the brain events. (How come a "different realm" or whatever? Question: Is there a real difference between interactionism and identity theory in an event ontology?)
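- The structural contrast among these three positions can be made vivid with a deliberately crude sketch. (The sketch is mine alone; the class names, the Boolean "mental" flag, and the example events are illustrative assumptions, not anything in Searle's text or in the identity-theory literature.) Model the event ontology as a directed graph: nodes are events, edges are causal links. The three theories then differ only in where the mental events sit and in which direction the links run.

```python
# A minimal sketch, assuming an event ontology can be modelled as a
# directed graph of events (nodes) and causal links (edges).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    name: str
    mental: bool = False  # does this event have an intrinsically mental nature?

@dataclass
class CausalNetwork:
    events: set = field(default_factory=set)
    links: set = field(default_factory=set)  # directed (cause, effect) pairs

    def add_link(self, cause: Event, effect: Event) -> None:
        self.events.update({cause, effect})
        self.links.add((cause, effect))

# Identity theory: some events in the brain network simply ARE mental.
identity = CausalNetwork()
pain = Event("c-fibre firing / felt pain", mental=True)  # one and the same event
identity.add_link(pain, Event("withdrawal reflex"))

# Epiphenomenalism: mental events dangle off the main net by one-way links,
# and no link ever runs from the dangler back into the net.
epi = CausalNetwork()
epi.add_link(Event("c-fibre firing"), Event("felt pain", mental=True))

# Interactionism: links run in both directions, yet the mental events are
# said to occupy a "different realm" - a distinction for which nothing in
# the network itself answers.
inter = CausalNetwork()
mind, body = Event("felt pain", mental=True), Event("c-fibre firing")
inter.add_link(body, mind)
inter.add_link(mind, body)
```

The sketch makes the parenthetical question above graphic: once everything is events and causal links, interactionism's "different realm" has no structural correlate in the network, which is just what invites the suspicion that, in an event ontology, interactionism collapses into the identity theory.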
- Assuming that Searle would accept the event ontology, if only for the sake of discussion, would he say that mental events, in general, and instances of intentional states, in particular, are parts of the machine, or is it his position that they are just products of the machine? That is, would Searle be inclined to accept the identity thesis, or would he lean toward either epiphenomenalism or interactionism? For my money, in such a context, the identity theory seems by far the most plausible, elegant, and economical guess. To be sure, it must face serious and, as yet, completely unsolved problems, such as the "grain objection" (see, for example, Maxwell 1978) and emergence versus panpsychism (see, for example, Popper & Eccles 1977), but I believe that epiphenomenalism and interactionism face even more severe difficulties.
- Before proceeding, I should emphasize that contemporary scientific knowledge not only leads us to an event ontology but also indicates the falsity of naive realism and "gently persuades" us to accept what I have (somewhat misleadingly, I fear) called "structural realism." According to this, virtually all of our knowledge of the physical world is knowledge of the structure (including space-time structure) of the causal networks that constitute it. (See, for example, Russell 1948 and Maxwell 1976.) This holds with full force for knowledge about the brain (except for a very special kind of knowledge, to be discussed soon). We are, therefore, left ignorant as to what the intrinsic (nonstructural) properties of "matter" (or what contemporary physics leaves of it) are. In particular, if only we knew a lot more neurophysiology, we would know the structure of the (immense) causal network that constitutes the brain, but we would not know its content; that is, we still wouldn't know what any of its constituent events are. Identity theory goes a step further and speculates that some of these events just are (instances of) our intentional states, our sensations, our emotions, and so on, in all of their genuinely mentalistic richness, as they are known directly "by acquaintance." This is the "very special knowledge" mentioned above, and if identity theory is true, it is knowledge of what some (probably a very small subset) of the events that constitute the brain are. In this small subset of events we know intrinsic as well as structural properties.
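- The same graph-theoretic picture can illustrate this epistemic point, again as a rough sketch of my own (the event names and placeholder strings are assumptions for exposition only): completed neurophysiology would fix the pattern of causal links, but two "worlds" can share that pattern while differing in what their events intrinsically are, and only knowledge by acquaintance could tell them apart.

```python
# A toy illustration of structural vs. intrinsic knowledge. The structure
# (which events cause which) is what physics and neurophysiology deliver.
structure = {("e1", "e2"), ("e2", "e3")}

# Two candidate "worlds" realizing the very same structure.
world_a = {"e1": "<intrinsic nature unknown>",
           "e2": "<intrinsic nature unknown>",
           "e3": "<intrinsic nature unknown>"}

# The identity theorist's speculation: for a small subset of brain events,
# acquaintance supplies the content that structural knowledge leaves open.
world_b = dict(world_a, e2="a felt pain, known by acquaintance")

# Same nodes, same causal structure; the worlds differ only in intrinsic
# content, which is exactly what structural knowledge cannot register.
assert set(world_a) == set(world_b)
```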
- Let us return to one of the questions posed by Searle: "could an artifact, a man-made machine, think?" The answer he gives is, I think, the best possible one, given our present state of unbounded ignorance in the neurosciences, but I'd like to elaborate a little. Since, as I have claimed above, thoughts and other (genuinely) mental events are part of the hardware of "thinking machines," such hardware must somehow be got into any such machine we build. At present we have no inkling as to how this could be done. The best bet would seem to be, as Searle indicates, to "build" a machine (out of protoplasm) with neurons, dendrites, and axons like ours, and then to hope that, from this initial hardware, mental hardware would be mysteriously generated (would "emerge"). But this "best bet" seems to me extremely implausible. However, I do not conclude that construction of a thinking machine is (even contingently, much less logically) impossible. I conclude, rather, that we must learn a lot more about physics, neurophysiology, neuropsychology, psychophysiology, and so on - not just more details, but much more about the very foundations of our theoretical knowledge in these areas - before we can even speculate with much sense about building thinking machines. (I have argued in Maxwell 1978 that the foundations of contemporary physics are in such bad shape that we should hope for truly "revolutionary" changes in physical theory, that such changes may very well aid immensely in "solving the mind-brain problems," and that speculations in neurophysiology and perhaps even psychology may very well provide helpful hints for the physicist in his renovation of, say, the foundations of space-time theory.) Be all this as it may, Searle has shown the total futility of the strong AI route to genuine artificial intelligence.