‘I Married a Computer’: An Exchange (between Ray Kurzweil and John Searle)
Searle (John)
Source: New York Review of Books, May 20th 1999


Full Text (From NY Books: Kurzweil & Searle - I Married a Computer)

  1. Ray Kurzweil claims that I presented a “distorted caricature” of his book, but he provides no evidence of any distortion. In fact I tried very hard to be scrupulously accurate both in reporting his claims and in conveying the general tone of futuristic techno-enthusiasm that pervades the book. Here are the theses in his book that I found most striking:
    • 1. Kurzweil thinks that within a few decades we will be able to download our minds onto computer hardware. We will continue to exist as computer software. “We will be software, not hardware” (p. 129, his italics). And “the essence of our identity will switch to the permanence of our software” (p. 129).
    • 2. According to him, we will be able to rebuild our bodies, cell by cell, with different and better materials using “nanotechnology.” Eventually, “there won’t be a clear difference between humans and robots” (p. 148).
    • 3. We will be immortal, not only because we will be made of better materials, but because even if we are destroyed we will keep copies of our programs and databases in storage and can be reconstructed at will. “Our immortality will be a matter of being sufficiently careful to make frequent backups,” he says, adding the further caution: “If we’re careless about this, we’ll have to load an old backup copy and be doomed to repeat our recent past” (p. 129). (What is this supposed to mean? That we will be doomed to repeat our recent car accident and spring vacation?)
    • 4. We will have overwhelming evidence that computers are conscious. Indeed there will be “no longer any clear distinction between humans and computers” (p. 280).
    • 5. There will be many advantages to this new existence, but one he stresses is that virtual sex will soon be a “viable competitor to the real thing,” affording “sensations that are more intense and pleasurable than conventional sex” (p. 147).
  2. Frankly, had I read this as a summary of some author’s claims, I might think it must be a “distorted caricature,” but Kurzweil does in fact make each of these claims, as I show by extensive quotation. In his letter he does not challenge me on any of these central points. He concedes by his silence that my understanding of him on these central issues is correct. So where is the “distorted caricature”?
  3. I then point out that his arguments are inadequate to establish any of these spectacular conclusions. They suffer from a persistent confusion between simulating a cognitive process and duplicating it, and an even worse confusion between the observer-relative, in-the-eye-of-the-beholder sense of concepts like intelligence, thinking, etc., and the observer-independent intrinsic sense.
  4. What has he to say in response? Well, about the main argument he says nothing. About the distinction between simulation and duplication, he says he is describing neither simulations of mental powers nor re-creations of the real thing, but “functionally equivalent re-creation.” But the notion “functionally equivalent” is ambiguous precisely between simulation and duplication. What exactly functions to do exactly what? Does the computer simulation function to enable the system to have external behavior which is as if it were conscious, or does it function to actually cause internal conscious states? For example, my pocket calculator is “functionally equivalent” to (indeed better than) me in producing answers to arithmetic problems, but it is not thereby functionally equivalent to me in producing the conscious thought processes that go with solving arithmetic problems. Kurzweil’s argument about consciousness is based on the assumption that the external behavior is overwhelming evidence for the presence of the internal conscious states. He has no answer to my objection that once you know that the computer works by shuffling symbols, its behavior is no evidence at all for consciousness. The notion of functional equivalence does not overcome the distinction between simulation and duplication, it just disguises it for one step.
  5. In his letter he tells us he is interested in doing “reverse engineering” to figure out how the brain works. But in the book there is virtually nothing about the actual working of the brain and how the specific electro-chemical properties of the thalamo-cortical system could produce consciousness. His attention rather is on the computational advantages of superior hardware.
  6. On the subject of consciousness there actually is a “distorted caricature,” but it is Kurzweil’s distorted caricature of my arguments. He says, “Searle would have us believe that you can’t be conscious if you don’t squirt neurotransmitters (or some other specific biological process).” Here is what I actually wrote: “I believe there is no objection in principle to constructing an artificial hardware system that would duplicate the causal powers of the brain to cause consciousness using some chemistry different from neurons.” Not much about the necessity of squirting neurotransmitters there. The point I made, and repeat here, is that because we know that brains cause consciousness with specific biological mechanisms, any non-biological mechanism has to share with brains the causal power to do it. An artificial brain might succeed by using something other than carbon-based chemistry, but just shuffling symbols is not enough, by itself, to guarantee those powers. Once again, he offers no answer to this argument.
  7. He challenges my Chinese Room Argument, but he seriously misrepresents it. The argument is not the circular claim that I do not understand Chinese because I am just a computer, but rather that I don’t as a matter of fact understand Chinese and could not acquire an understanding by carrying out a computer program. There is nothing circular about that. His chief counterclaim is that the man is only the central processing unit, not the whole computer. But this misses the point of the argument. The reason the man does not understand Chinese is that he does not have any way to get from the symbols, the syntax, to what the symbols mean, the semantics. But if the man cannot get the semantics from the syntax alone, neither can the whole computer. It is, by the way, a misunderstanding on his part to think that I am claiming that a man could actually carry out the billions of steps necessary to carry out a whole program. The point of the example is to illustrate the fact that the symbol manipulations alone, even billions of them, are not constitutive of meaning or thought content, conscious or unconscious. To repeat, the syntax of the implemented program is not semantics.
  8. Concerning other points in his letter: He says that I am wrong to think that he attributes superior thinking to Deep Blue. But here is what he wrote in response to the charge that Deep Blue just does number crunching and not thinking: “One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to think very much during the tournament” (p. 290).
  9. He also says that on his view Moore’s Law is only a part of the story. Quite so. In my review I mention other points he makes such as, importantly, nanotechnology.
  10. I cannot recall reading a book in which there is such a huge gulf between the spectacular claims advanced and the weakness of the arguments given in their support. Kurzweil promises us our minds downloaded onto decent hardware, new bodies made of better stuff, evolution without DNA, better sex without the inconvenience of actual partners, computers that convince us that they are conscious, and above all personal immortality. The main theme of my review is that the existing technological advances that are supposed to provide evidence in support of these predictions, wonderful though they are, offer no support whatever for these spectacular conclusions. In every case the arguments are based on conceptual confusions. Increased computational power by itself is no evidence whatever for consciousness in computers. On these central issues, Kurzweil’s letter is strangely silent.

Comment:

Related to "Kurzweil (Ray) - The Age of Spiritual Machines". Response to "Kurzweil (Ray) - ‘I Married a Computer’: An Exchange (between Ray Kurzweil and John Searle)".
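
As an illustration of Searle’s point in paragraphs 4 and 7 (external behaviour versus internal states, syntax versus semantics), the sketch below is my own and is not part of either letter. The rule book, the sample sentences and the function name chinese_room are invented for the example; the only claim is that a program of this kind maps input strings to output strings by rote lookup, so nothing in it deals with what the symbols mean.

# A minimal sketch of pure symbol manipulation: input strings are mapped to
# output strings by rote lookup. The Chinese sentences are placeholders
# invented for this example; any strings would do, which is the point --
# the program handles only the shape of the symbols, never their meaning.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am well, thank you."
    "今天星期几？": "今天是星期二。",  # "What day is it today?" -> "Today is Tuesday."
}

def chinese_room(question: str) -> str:
    """Return the scripted reply for a known question: syntax in, syntax out."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    for q in RULE_BOOK:
        print(q, "->", chinese_room(q))

Externally the replies can look competent, which is all that “functionally equivalent” behaviour guarantees; swap the Chinese strings for arbitrary tokens and the program runs unchanged, which is what “the syntax of the implemented program is not semantics” amounts to.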




© Theo Todman, June 2007 - Oct 2020.