Full Text (From Link)
- John Searle starts with a distorted caricature of my book and then attacks the caricature [“I Married a Computer,” NYR, April 8]. Had I written the book he describes, I would attack it also. It is impossible here to unravel the thicket of Searle’s misunderstandings, massive misrepresentations, out-of-context quotes, and philosophical sleights of hand, so I direct the reader to Link. The following offers a few salient observations.
- Searle writes that I “frequently cite IBM’s Deep Blue as evidence of superior intelligence in the computer.” The opposite is the case: I cite Deep Blue to examine the “human and [contemporary] machine approaches to chess…not to belabor the issue of chess, but rather because [they] illustrate a clear contrast” (p. 289). Human thinking follows a very different paradigm. Solutions emerge in the human brain from the unpredictable interaction of millions of simultaneous self-organizing chaotic processes. There are profound advantages to the human paradigm: we can recognize and respond to extremely subtle patterns. But we can build machines the same way.
- Searle says that my book “is an extended reflection of the implications of Moore’s Law.” But the exponential growth of computing power is only a small part of the story. As I repeatedly state, adequate computational power is a necessary but not sufficient condition to achieve human levels of intelligence. Searle essentially doesn’t mention my primary thesis: we are learning how to organize these increasingly formidable resources by reverse engineering the human brain itself. By examining brains in microscopic detail, we will be able to re-create and then vastly extend these processes.
- Searle is best known for his “Chinese Room” analogy and has presented various formulations of it over twenty years (see web posting). His descriptions illustrate a failure to understand the essence of either brain processes or the non-biological processes that could replicate them. Searle starts with the assumption that the man in the room doesn’t understand anything because, after all, “he is just a computer,” thereby illuminating Searle’s own bias. Searle then concludes—no surprise—that the computer doesn’t understand. Searle combines this tautology with a basic contradiction: the computer doesn’t understand Chinese, yet (according to Searle) can convincingly answer questions in Chinese. But if an entity—biological or otherwise—really doesn’t understand human language, it will quickly be unmasked by a competent interlocutor. In addition, for the program to respond convincingly, it would have to be as complex as a human brain. The observers would be long dead while the man in the room spends millions of years following a program billions of pages long.
- Most importantly, the man is acting only as the central processing unit, a small part of a system. While the man may not see it, the understanding is distributed across the entire pattern of the program itself and the billions of notes he would have to make to follow the program. I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and inter-neuronal connections.
- Searle writes that I confuse a simulation with a re-creation of the real thing. What my book actually talks about is a third category: functionally equivalent re-creation. He writes that we could not stuff a pizza into a computer simulation of the stomach and expect it to be digested. But we could indeed accomplish this with a properly designed artificial stomach. I am not talking about a mere “simulation” of the human brain as Searle construes it, but rather functionally equivalent re-creations of its causal powers. We already have functionally equivalent replacements of portions of the brain to overcome such disabilities as deafness and Parkinson’s disease.
- Well, I haven’t even touched on the issue of consciousness (see my posting). Searle writes: “It is out of the question…to suppose that…the computer is conscious.” Given this assumption, Searle’s conclusions are no surprise. Amazingly, Searle writes that “human brains cause consciousness by… specific neurobiological processes.” Now who is being the reductionist here? Searle would have us believe that you can’t be conscious if you don’t squirt neurotransmitters (or some other specific biological process). No entities based on functionally equivalent processes need apply. This biology-centric view of consciousness is likely to go the way of other human-centric beliefs. In my view, we cannot penetrate subjective experience with objective measurement, which is why many classical approaches to its understanding quickly hit a wall.
- Searle’s slippery and circular arguments aside, non-biological entities, which today have many narrowly focused skills, are going to vastly expand in the breadth, depth, and subtlety of their intelligence and creativity. My book discusses the impact this will have on our human-machine civilization (including just the sorts of legal issues that Searle claims I ignore), a development no less important than the emergence of human intelligence some thousands of generations ago.
Response to Link; see "Kurzweil (Ray) - ‘I Married a Computer’: An Exchange (between Ray Kurzweil and John Searle)". Related to "Kurzweil (Ray) - The Age of Spiritual Machines".
Text Colour Conventions
- Blue: Text by me; © Theo Todman, 2018
- Mauve: Text by correspondent(s) or other author(s); © the author(s)