Why Computers Can't Act
Baker (Lynne Rudder)
Source: American Philosophical Quarterly, 18:157–63, 1981
Paper - Abstract



Author’s Abstract

  1. There are numerous claims against artificial intelligence: computers have no natural interests; they are not properly embodied; they cannot handle ambiguities of various kinds; and so on. Although such claims of inadequacy have been mounted and rebutted with fervor, there is an equally profound deficiency that has not been noticed at all. Without denying that artificial models of intelligence may be useful for suggesting hypotheses to psychologists and neurophysiologists, I shall argue that there is a radical limitation to applying such models to human intelligence. And this limitation is exactly the reason why computers can't act.
  2. My argument that machines cannot act is extremely simple. It goes like this:
    • P1: In order to be an agent, an entity must be able to formulate intentions.
    • P2: In order to formulate intentions, an entity must have an irreducible first-person perspective.
    • P3: Machines lack an irreducible first-person perspective.
    • C: Therefore, machines are not agents.
  3. Since the argument is clearly valid, we need only determine whether the premises are true.
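The validity claimed here can be checked mechanically. Below is a minimal sketch of the syllogism in Lean; the predicate names (Agent, FormulatesIntentions, FirstPersonPerspective) are my own labels for the paper's notions, not terminology from the source. The proof is just two conditionals chained into a modus tollens:

```lean
-- Hypothetical formalization of the paper's argument (labels are assumptions).
variable (Entity : Type)
variable (Agent FormulatesIntentions FirstPersonPerspective : Entity → Prop)

theorem machines_not_agents
    (P1 : ∀ e, Agent e → FormulatesIntentions e)       -- agents formulate intentions
    (P2 : ∀ e, FormulatesIntentions e → FirstPersonPerspective e)
    (m : Entity)                                       -- a machine
    (P3 : ¬ FirstPersonPerspective m) :                -- machines lack the perspective
    ¬ Agent m :=
  fun h => P3 (P2 m (P1 m h))                          -- C: so m is not an agent
```

This confirms only that the conclusion follows from the premises; as the text notes, the philosophical work lies in establishing that the premises are true.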
  4. The first premise is simply a matter of definition. All actions are performed by agents, and agents may be defined as beings capable of formulating intentions. Intentions are often formulated in language, but they need not be. For example, I may intend to get to a lecture on time without ever putting into words, "I'm going to get there on time." But in order to be an agent, a being must be capable of formulating such thoughts, whether he expresses them in a language or not. H.-N. Castañeda provides a convenient model: intending is a dispositional mental state of endorsingly thinking such thoughts as "I shall do A." Such thought-contents Castañeda calls "practitions," to distinguish them as the practical counterparts of propositions. In linking a subject and an action practically, practitions have a causal thrust which propositions — e.g., propositions expressing predictions about oneself — lack.
  5. The second and third premises require elaboration and support. First, I shall explain the first-person perspective and then show why computers lack it; finally, and much more briefly, I shall argue that the first-person perspective is required in order to formulate intentions and hence in order to be an agent.





© Theo Todman, June 2007 - Oct 2019. Please address any comments on this page to theo@theotodman.com.