- A week ago, I heard James Conant give a talk at Tufts, entitled “Two Varieties of Skepticism”, in which he distinguished two oft-confounded questions:
- Descartes: How is it possible for me to tell whether a thought of mine is true or false, perception or dream?
- Kant: How is it possible for something even to be a thought (of mine)? What are the conditions for the possibility of experience (veridical or illusory) at all?
- Conant’s excellent point was that in the history of philosophy, up to this very day, we often find philosophers talking past each other because they don’t see the difference between the Cartesian question (or family of questions) and the Kantian question (or family of questions), or because they try to merge the questions. I want to add a third version of the question:
- Turing: How could we make a robot that had thoughts, that learned from “experience” (interacting with the world) and used what it learned the way we can do?
- There are two main reactions to Turing’s proposal to trade in Kant’s question for his.
- Cool! Turing has found a way to actually answer Kant’s question!
- Aaaargh! Don’t fall for it! You’re leaving out . . . experience!
- I’m captain of the A team (along with Quine, Rorty, Hofstadter, the Churchlands, Andy Clark, Lycan, Rosenthal, Harman, and many others). I think the A team wins, but I don’t think it is obvious. In fact, I think it takes a rather remarkable exercise of the imagination to see how it might even be possible, but I do think one can present a powerful case for it. As I like to put it, we are robots made of robots – we’re each composed of some few trillion robotic cells, each one as mindless as the molecules they’re composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent. Turing’s great contribution was to show us that Kant’s question could be recast as an engineering question. Turing showed us how we could trade in the first-person2 perspective of Descartes and Kant for the third-person perspective of the natural sciences and answer all the questions – without philosophically significant residue.
- David Chalmers is the captain of the B team (along with Nagel, Searle, Fodor, Levine, Pinker, Harnad and many others). He insists that he just knows that the A team leaves out consciousness. It doesn’t address what Chalmers calls the Hard Problem. How does he know? He says he just does. He has a gut intuition, something he has sometimes called “direct experience.” I know the intuition well. I can feel it myself. When I put up Turing’s proposal just now, if you felt a little twinge, a little shock, a sense that your pocket had just been picked, you know the feeling too. I call it the Zombic3 Hunch ("Dennett (Daniel) - The Zombic Hunch: Extinction of an Intuition?"). I feel it, but I don’t credit it. I figure that Turing’s genius permitted him to see that we can leap over the Zombic Hunch. We can come to see it, in the end, as a misleader, a roadblock to understanding. We’ve learned to dismiss other such intuitions in the past – the obstacles that so long prevented us from seeing the Earth as revolving around the Sun, or seeing that living things were composed of non-living matter. It still seems that the Sun goes round the Earth, and it still seems that a living thing has some extra spark, some extra ingredient that sets it apart from all non-living stuff, but we’ve learned not to credit those intuitions. So now, do you want to join me in leaping over the Zombic Hunch, or do you want to stay put, transfixed by this intuition that won’t budge? I will try to show you how to join me in making the leap.
- See Link.
- A written version of a debate with David Chalmers, held at Northwestern University, Evanston, IL, February 15, 2001, supplemented by an email debate with Alvin Goldman.
- For Zombies: Click here for Note.
- The hunch works for simple robots (and animals), but can be escaped as we increase complexity – for the reasons Dennett gives – because we are (nested) robots.
- But the degree of complexity needed may be enormous, and may pose an insoluble “engineering problem” outside of biology.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2020
- Mauve: Text by correspondent(s) or other author(s); © the author(s)