Introduction (Full Text – “Penrose vs AI - Again”)
- Roger Penrose's new book, Shadows of the Mind, is strongly reminiscent of his previous work in the same vein, The Emperor's New Mind. This book restates the author's central line of argument about the place of consciousness in the material world. He has no sympathy at all for attempts to work out a computationalist theory of mind, and instead pins his hopes on a future theory that would allow large-scale quantum-mechanical effects in the brain to play a central role.
- A broad outline of his argument goes like this:
- Because of Gödel's Incompleteness Theorem, mathematical insight cannot be mechanized.
- Mathematical insight depends on consciousness, and so it is doubtful that any part of consciousness can be mechanized.
- But then a physical system can be conscious only if it can't be simulated by a computer.
- That would be very strange; fortunately, the world as imagined in modern physics is very strange.
- The interaction between quantum mechanics and the general theory of relativity is poorly understood. Fundamental questions about time and causality seem to depend on how that interaction gets worked out.
- Perhaps the brain exploits some large-scale quantum coherence to achieve consciousness. Perhaps the site of this effect is in the cytoskeletons of neurons.
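The first step of the outline leans on Gödel's theorem, whose core is a diagonal argument. As a toy illustration (my own, not Penrose's), here is the finite analogue: given any list of total functions, the function that differs from the n-th function at input n cannot itself appear on the list.

```python
# Finite analogue of the diagonalization behind Godel-style results:
# for any enumeration of total functions, the diagonal function
# d(n) = machines[n](n) + 1 differs from every function in the list.

machines = [
    lambda n: 0,        # the constant-zero function
    lambda n: n,        # the identity function
    lambda n: n * n,    # squaring
    lambda n: 2 ** n,   # exponentiation
]

def diagonal(n):
    # Differs from machines[n] at input n by construction.
    return machines[n](n) + 1

# diagonal disagrees with machine i at input i, so it is not on the list.
for i, m in enumerate(machines):
    assert diagonal(i) != m(i)
```

The anti-mechanist reading takes the human mathematician to play the role of `diagonal` against any proposed machine enumeration; the standard objection, which the review develops below, is that nothing guarantees the mathematician's own insight is consistent or escapes the enumeration.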
- This argument, when put down in black and white, seems extraordinarily weak. The least speculative step is the first, but it is also the easiest to show to be fallacious, as I will do shortly. Before I do, though, I want to raise the question: why is Penrose bothering?
- A clue might be this sentence on p. 373: "It is only the arrogance of our present age that leads so many to believe that we now know all the basic principles that can underlie all the subtleties of biological action." Penrose wants to do battle against the arrogance he perceives, especially in the AI community, regarding the problem of consciousness. It is true that AI has, from its inception, had the ambition to explain everything about the mind, including consciousness. But is this arrogance? Or merely the sincere adoption of a working hypothesis? If someone wants to work on the problem of mind, it seems to me that he must choose among three options: treat the brain as a computer, and study which parts compute what; study neurons, on the assumption that they might be doing something noncomputational; or work in a seemingly unrelated field, like physics, on the off chance that something relevant will turn up. In any case, no matter which tack is taken, one gets mighty few occasions to feel arrogant about one's success. Neuroscience and AI have made definite progress, and so has physics, for that matter, but their successes haven't resulted in a general theory of mind. If anything, AI seemed closer to such a theory thirty years ago than it seems now.
- So if someone wants to believe that AI will never explain the mind, he might as well. The burden of proof is on whoever claims it ultimately will. Penrose isn't satisfied with this state of affairs, however, and wants to exhibit a proof that a computationalist theory of mind is impossible. I suppose he sees himself fighting for the hearts and minds of neutral parties, who are in danger of being fooled into thinking that AI is on the verge of such a theory by the breathless stories they read in the papers. I don't know; perhaps an argument like Penrose's will, once it has been filtered through the distorting lens of the TV camera, be a sort of homeopathic antidote to those breathless stories. But, I regret to say, the argument would still be wrong. And so those of us in a position to point out the flaws in it must sheepishly rise to do so, in the full knowledge that AI can't win the debate if it degenerates into Mutual Assured Destruction ("You can't prove AI is possible," "Oh yeah? Well, you can't prove it's not").
Review of "Penrose (Roger) - Shadows of the Mind"; Link (Defunct).