- Artificial-intelligence research is undergoing a revolution. To explain how and why, and to put John R. Searle’s argument in perspective, we first need a flashback. By the early 1950s the old, vague question, *Could a machine think?*, had been replaced by the more approachable question, *Could a machine that manipulated physical symbols according to structure-sensitive rules think?* This question was an improvement because formal logic and computational theory had seen major developments in the preceding half-century. Theorists had come to appreciate the enormous power of abstract systems of symbols that undergo rule-governed transformations. If those systems could just be automated, then their abstract computational power, it seemed, would be displayed in a real physical system.
- This insight spawned a well-defined research program with deep theoretical underpinnings. *Could a machine think?* There were many reasons for saying yes. One of the earliest and deepest reasons lay in two important results in computational theory.
- The first was Church’s thesis, which states that every effectively computable function is recursively computable. *Effectively computable* means that there is a “rote” procedure for determining, in finite time, the output of the function for a given input. *Recursively computable* means more specifically that there is a finite set of operations that can be applied to a given input, and then applied again and again to the successive results of such applications, to yield the function’s output in finite time. The notion of a rote procedure is non-formal and intuitive; thus, Church’s thesis does not admit of a formal proof. But it does go to the heart of what it is to compute, and many lines of evidence converge in supporting it.
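The idea of building a function from a finite set of operations "applied again and again" can be illustrated with a small sketch (an assumed illustration, not drawn from the article): multiplication constructed entirely from repeated applications of a single elementary operation, the successor.

```python
def succ(n):
    """The elementary operation: add one."""
    return n + 1

def add(a, b):
    """Addition as b repeated applications of succ to a."""
    result = a
    for _ in range(b):
        result = succ(result)
    return result

def mul(a, b):
    """Multiplication as b repeated applications of add."""
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result
```

Each level is a rote procedure that terminates in finite time, which is the intuitive content of "recursively computable."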
- The second important result was Alan M. Turing’s demonstration that any recursively computable function can be computed in finite time by a maximally simple sort of symbol-manipulating machine that has come to be called a universal Turing machine. This machine is guided by a set of recursively applicable rules that are sensitive to the identity, order and arrangement of the elementary symbols it encounters as input.
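A toy simulator makes the Turing-machine idea concrete (the simulator and the sample rule table below are illustrative assumptions, not taken from the article). The transition table is "sensitive to the identity, order and arrangement" of symbols: each rule maps a (state, symbol) pair to a symbol to write, a head movement, and a new state.

```python
def run_turing_machine(rules, tape, state, head=0, halt_state="HALT", max_steps=10_000):
    """Apply the recursively applicable rules until the machine halts."""
    cells = dict(enumerate(tape))          # sparse tape; blank cells read '_'
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip("_")

# Sample rules: increment a binary number, head starting on the rightmost bit.
INCREMENT = {
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry gives 0, carry propagates
    ("carry", "0"): ("1", "N", "HALT"),    # carry absorbed
    ("carry", "_"): ("1", "N", "HALT"),    # number grows by one digit leftward
}

print(run_turing_machine(INCREMENT, "1011", "carry", head=3))  # prints "1100"
```

Despite its maximal simplicity, a machine of this kind, given the right rule table, can compute anything a modern computer can.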
- These two results entail something remarkable, namely that a standard digital computer, given only the right program, a large enough memory and sufficient time, can compute any rule-governed input-output function. That is, it can display any systematic pattern of responses to the environment whatsoever.
- We, and Searle, reject the Turing test as a sufficient condition for conscious intelligence.
- At one level our reasons for doing so are similar: we agree that it is also very important how the input-output function is achieved; it is important that the right sorts of things be going on inside the artificial machine.
- At another level, our reasons are quite different. Searle bases his position on common-sense intuitions about the presence or absence of semantic content. We base ours on the specific behavioral failures of the classical SM (symbol-manipulating) machines and on the specific virtues of machines with a more brain-like architecture.
- These contrasts show that certain computational strategies have vast and decisive advantages over others where typical cognitive tasks are concerned, advantages that are empirically inescapable. Clearly, the brain is making systematic use of these computational advantages. But it need not be the only physical system capable of doing so. Artificial intelligence, in a non-biological but massively parallel machine, remains a compelling and discernible prospect.
- Authors' answer: Classical AI is unlikely to yield conscious machines; systems that mimic the brain might.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2019
- Mauve: Text by correspondent(s) or other author(s); © the author(s)