‘I Married a Computer’: An Exchange (between Ray Kurzweil and John Searle)
Kurzweil (Ray)
Source: New York Review of Books, May 20, 1999
Paper - Abstract



Full Text (From Web Link)

  1. John Searle starts with a distorted caricature of my book and then attacks the caricature [“I Married a Computer,” NYR, April 8]. Had I written the book he describes, I would attack it also. It is impossible here to unravel the thicket of Searle’s misunderstandings, massive misrepresentations, out-of-context quotes, and philosophical sleights of hand, so I direct the reader to Web Link. The following offers a few salient observations.
  2. Searle writes that I “frequently cite IBM’s Deep Blue as evidence of superior intelligence in the computer.” The opposite is the case: I cite Deep Blue to examine the “human and [contemporary] machine approaches to chess…not to belabor the issue of chess, but rather because [they] illustrate a clear contrast” (p. 289). Human thinking follows a very different paradigm. Solutions emerge in the human brain from the unpredictable interaction of millions of simultaneous self-organizing chaotic processes. There are profound advantages to the human paradigm: we can recognize and respond to extremely subtle patterns. But we can build machines the same way.
  3. Searle says that my book “is an extended reflection of the implications of Moore’s Law.” But the exponential growth of computing power is only a small part of the story. As I repeatedly state, adequate computational power is a necessary but not sufficient condition to achieve human levels of intelligence. Searle essentially doesn’t mention my primary thesis: we are learning how to organize these increasingly formidable resources by reverse engineering the human brain itself. By examining brains in microscopic detail, we will be able to re-create and then vastly extend these processes.
  4. Searle is best known for his “Chinese Room” analogy and has presented various formulations of it over twenty years (see web posting). His descriptions illustrate a failure to understand the essence of either brain processes or non-biological processes that could replicate them. Searle starts with the assumption that the man in the room doesn’t understand anything because, after all, “he is just a computer,” thereby illuminating Searle’s own bias. Searle then concludes—no surprise—that the computer doesn’t understand. Searle combines this tautology with a basic contradiction: the computer doesn’t understand Chinese, yet (according to Searle) can convincingly answer questions in Chinese. But if an entity—biological or otherwise—really doesn’t understand human language, it will quickly be unmasked by a competent interlocutor. In addition, for the program to respond convincingly, it would have to be as complex as a human brain. The observers would be long dead while the man in the room spends millions of years following a program billions of pages long.
  5. Most importantly, the man is acting only as the central processing unit, a small part of a system. While the man may not see it, the understanding is distributed across the entire pattern of the program itself and the billions of notes he would have to make to follow the program. I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and inter-neuronal connections.
  6. Searle writes that I confuse a simulation with a re-creation of the real thing. What my book actually talks about is a third category: functionally equivalent re-creation. He writes that we could not stuff a pizza into a computer simulation of the stomach and expect it to be digested. But we could indeed accomplish this with a properly designed artificial stomach. I am not talking about a mere “simulation” of the human brain as Searle construes it, but rather functionally equivalent re-creations of its causal powers. We already have functionally equivalent replacements of portions of the brain to overcome such disabilities as deafness and Parkinson’s disease.
  7. Well, I haven’t even touched on the issue of consciousness (see my posting). Searle writes: “It is out of the question…to suppose that…the computer is conscious.” Given this assumption, Searle’s conclusions are no surprise. Amazingly, Searle writes that “human brains cause consciousness by… specific neurobiological processes.” Now who is being the reductionist here? Searle would have us believe that you can’t be conscious if you don’t squirt neurotransmitters (or some other specific biological process). No entities based on functionally equivalent processes need apply. This biology-centric view of consciousness is likely to go the way of other human-centric beliefs. In my view, we cannot penetrate subjective experience with objective measurement, which is why many classical approaches to its understanding quickly hit a wall.
  8. Searle’s slippery and circular arguments aside, non-biological entities, which today have many narrowly focused skills, are going to vastly expand in the breadth, depth, and subtlety of their intelligence and creativity. My book discusses the impact this will have on our human-machine civilization (including just the sorts of legal issues that Searle claims I ignore), a development no less important than the emergence of human intelligence some thousands of generations ago.

Comment:

Response to Searle’s review “I Married a Computer” (see Web Link). Related to "Kurzweil (Ray) - The Age of Spiritual Machines".


