Computation and Cognition: Toward a Foundation of Cognitive Science
Pylyshyn (Zenon)
This Page provides (where held) the Abstract of the above Book and those of all the Papers contained in it.



Back Cover Blurb


Contents
    Preface – xi
    • What Is Cognitive Science? – xi
    • Why Bother with Foundations? – xviii
    • Some Personal Background and Intellectual Debts – xxi
  1. The Explanatory Vocabulary of Cognition – 1
    • Cognitive Phenomena and Folk Psychology – 1
    • Capturing Generalizations – 6
    • The Stimulus-Independence of Cognitive Behavior – 12
    • Phenomena as "Events under Descriptions" – 16
  2. The Explanatory Role of Representations – 23
    • Introduction – 23
    • The Appeal to Representations – 24
    • Representational and Functional Levels – 28
    • Representational Content as Defining a Level of Description – 32
      → Levels and Constraints on Realizability – 35
    • Where Does the Semantic Interpretation Come From? – 38
    • End Note 1: When Are Functional States Representations? – 45
    • End Note 2: On the Notion of Representational Content – 48
  3. The Relevance of Computation – 49
    • Some Background: Formalism, Symbols, and Mechanisms – 49
    • Computation as a Symbolic Process – 54
    • Syntax and Semantics in Computation – 59
      → A Numerical Example – 59
      → Cognitive Rules and Syntax – 62
      → A Note on the Notion of "Computing" – 69
    • The Role of Computer Implementation – 74
      → The Control Issue – 78
  4. The Psychological Reality of Programs: Strong Equivalence – 87
    • The Role of Functional Architecture – 87
    • Functional Architecture and Computer Programs – 93
      → Algorithms' Dependence on Functional Architecture – 95
    • Functional Architecture and Mental Processes – 101
  5. Constraining Functional Architecture – 107
    • The Level-of-Aggregation Problem – 107
    • Some Methodological Proposals – 111
    • Complexity Equivalence – 114
      → Strong Equivalence and Reaction-Time Measures – 120
    • Cognitive Penetrability – 130
      → Is Everything Cognitively Penetrable? – 140
  6. The Bridge from Physical to Symbolic: Transduction – 147
    • Introduction – 147
      → Contact between Mechanism and Environment – 149
      → The Transducer Function – 151
    • Criteria for a Psychological Transducer – 153
      → The Function of a Transducer Is Nonsymbolic – 154
      → A Transducer Is Primarily Stimulus-Bound – 155
      → A Transducer Output Is an Atomic Symbol (or n-tuple) – 158
      → Transducer Inputs Must Be Stated in the Language of Physics – 165
    • Additional Constraints on Transducers – 171
      → Can Transducers Be Identified by Neurophysiological Methods? – 171
      → Can Transducers Be Identified by Psychophysical Methods? – 173
      → Can Transducers Be Identified by Functional Analysis? – 176
      → Summary – 178
    • Some Consequences of this Approach: Is Perception "Direct"? – 179
      → "Ecological Optics" and the Perception of Spatial Layout – 180
      → Detecting Invariance – 187
      → Temporal Order and Memory – 188
  7. Functional Architecture and Analogue Processes – 193
    • Reasoning and the Language of Thought – 193
    • The Concept of an Analogue Process – 197
      → The Attraction of Analogues – 197
      → What Makes Something an Analogue Process? – 199
    • Forms of Constraint on Processes: The Notion of Capacity – 203
      → Intrinsic versus Semantic Constraints – 205
      → Explanation and Functional Architecture – 210
    • The Search for Fixed Architectural Functions – 216
  8. Mental Imagery and Functional Architecture – 225
    • Introduction: The Autonomy of the Imagery Process – 225
    • Tacit Knowledge and "Mental Scanning" – 231
      → The Empirical Phenomena: Mental Scanning – 231
      → Some Preliminary Considerations – 233
      → Task Demands of Imagery Experiments – 236
      → The Generality of the "Tacit Knowledge" View – 238
      → Some Empirical Evidence – 242
    • Other Arguments against the Tacit-Knowledge View – 245
      → Access to Tacit Knowledge – 245
      → Combining Imagery and Vision – 247
    • What Theoretical Claim About Imagery Is Being Made? – 251
  9. Epilogue: What Is Cognitive Science the Science of? – 257
    • Summary of Assumptions – 257
    • Carving Nature at the Joints – 263
    • Some Possible Noncognitive Effects: Learning, Development, and Moods – 266
    References – 273
    Index – 285

Preface
  • What Is Cognitive Science?
    • This book concerns the foundational assumptions of a certain approach to the study of the mind, which has lately become known as cognitive science. A more realistic way to put it might be to say that the material contained here is to the foundations of cognitive science what a hole in the ground is to the foundation of a house. It may not seem like what you eventually hope to have, but you do have to start some place. It is possible that despite considerable recent interest in the field, and despite the appearance of much commonality, cognitive science may have no single foundation; it may be just an umbrella title for a number of different sciences, all of which are, like the proverbial blind men trying to understand the elephant, attempting to understand the workings of the mind. If that is the case, cognitive science might be simply a political union based on an interest in a broad set of questions, and perhaps on a shared need for certain techniques, say, experimental methods or the techniques of computer simulation. Academic departments, such as schools of engineering or departments of psychology, are probably based on just such ties.
    • But there is another, much more exciting possibility: the prospect that cognitive science is a genuine scientific domain like the domains of chemistry, biology, economics, or geology. In scientific domains it is possible to develop theories based on a special vocabulary or reasonably uniform set of principles independent of the principles of other sciences—that is, principles with considerable autonomy. Many feel, as I do, that there may well exist a natural domain corresponding roughly to what has been called "cognition," which may admit of such a uniform set of principles. Just as the domain of biology includes something like all living things (for which a strict definition is probably impossible outside of biological theory), so the domain of cognitive science may be something like "knowing things," or, as George Miller (1984) colorfully dubbed it, the "informavores."
    • Humans are living things, and consequently advances in biological science will contribute to a fuller understanding of human nature. Similarly, because we are informavores, or cognizers, understanding human nature can also gain from the study of principles governing members of that domain. At the moment it appears that included in this category are the higher vertebrates and certain computer systems. In any case, in view of the kinds of considerations we will explore in this book, and the impressive successes in recent work on artificial intelligence, one ought to take such a possibility seriously.
    • Lest the prospect of being a sibling of the computer appear as disturbing as the prospect of being the nephew or niece of the great ape once was, we should keep in mind that these are merely ways of classifying individuals for the purpose of discovering some of their operating principles. After all, we are classified along with rocks, atoms, and galaxies for the purpose of revealing how we move in response to physical forces. No classification—including "parent," "sentient being," or "fallen angel"—can capture all that is uniquely human. Indeed, even considering all the natural kinds to which we belong will not render a vision of humans as "merely" something or other; but each gives us special insight into some aspect of our nature.
    • What, then, is the common nature of members of the class of cognizers? I will suggest that one of the main things cognizers have in common is that they act on the basis of representations. Put another way, to explain important features of their behavior, we must take into account their (typically tacit) knowledge and goals. Knowing the representations they possess, together with the assumption that much of their behavior is connected with their representations by certain general principles, we can explain an important segment of the regularities in behavior exhibited by these cognizers. This view (sometimes called the "representational theory of mind") is what brings the study of cognition into contact with both classic philosophical problems (the sort of problem that concerned Franz Brentano when he talked about the "intentionality of the mental") and ideas from computer science. In chapters 1 and 2 I introduce this set of issues in the context of the demands placed on the task of providing psychological explanations.
    • Even assuming that this general picture is correct, it raises important and deep puzzles. How is it possible for a physical system (and I assume that cognizers are physical systems) to act on the basis of "knowledge of" objects and relations to which the system is not causally connected in the correct way? Clearly, the objects of our fears and desires do not cause behavior in the same way that forces and energy cause behavior in the physical realm. When my desire for the pot of gold at the end of the rainbow causes me to go on a search, the (nonexistent) pot of gold is not a causal property of the sort that is involved in natural laws. It appears that what is responsible is something we call a belief or "representation" whose semantic content is the hoped-for goal. If that is the case, however, it gives the explanation of my behavior a highly different character from the explanation of why, say, my automobile moves forward.
    • There is much we would like to know about this sort of explanation, and about the kinds of processes that demand such an explanation—including how these explanations are related to those that appeal to natural laws. We would also like to know just what sort of regularities are explainable precisely in this way. Surely not everything about an organism's behavior need be explained in this way, for the organism also behaves in ways that can be explained quite well in terms of the laws of optics, acoustics, dynamics, chemistry, endocrinology, association, and so on. The natural category "cognizer" not only encompasses a population of objects such as people and computers but a subset of phenomena as well. Do we know what is included in this subset?
    • These are some of the issues for which we would like, if not final answers, at least a hint of the direction from which answers might come.
    • What I try to do in this book is address such questions in a way that places a premium on suggestive and provocative possibilities rather than on rigorous analyses. One of the central proposals that I examine is the thesis that what makes it possible for humans (and other members of the natural kind informavore) to act on the basis of representations is that they instantiate such representations physically as cognitive codes and that their behavior is a causal consequence of operations carried out on these codes. Since this is precisely what computers do, my proposal amounts to a claim that cognition is a type of computation. Important and far-reaching consequences follow if we adopt the view that cognition and computation are species of the same genus. Although "computer simulation" has been a source of much interest to psychologists for several decades now, it has frequently been viewed as either a useful metaphor or as a calculation device, a way to exhibit the consistency and completeness of independently framed theories in some domain of cognition¹. The very term computer simulation suggests verisimilitude—imitation—rather than a serious proposal as to how things really are. This way of viewing the relevance of computation has unfortunate consequences. Thinking of a model as a metaphor or a mnemonic removes the need to be rigorous in the way we appeal to the model to explain behavior; what does not fit can be treated as the irrelevant aspect of the metaphor. (See the section "What Theoretical Claim about Imagery Is Being Made?" in chapter 8, for an example of the consequences of appealing to a metaphor.) If the model is empirically adequate, however, there is no need to qualify one's interpretation by taking refuge in a metaphor. After all, no one believes that physical theories are a metaphor, or that they specify a way of imitating nature. Nor does anyone believe that physical theories represent a counterfeit, a device for calculation. Physics does not claim that the world behaves "as if" it were following the laws of physics, whereas it might actually be behaving that way for an entirely different reason or reasons. Physics purports to present a true account of how things really are.
    • The difference between taking a system as depicting truth, literally, and taking it to merely behave "as if" it were running through the lines of a cleverly constructed copy of nature's script, may be only a difference in attitude, but it has a profound effect on the way science is practiced. An important aspect of the acceptance of a system as a true account of reality is that the scientist thus can see that certain further observations are possible, while others are not. In other words, taken realistically, true scientific theories tell us what possibilities exist in nature, what might be observed under different conditions from those that obtain at present. According to this view, theories do much more than assert that when we make observations we find that certain things happen in certain ways, as is commonly believed by psychologists concerned with "accounting for variance." Building theories thus becomes a way of perceiving, a way of thinking about the world, of seeing things in a new way.
    • Plane geometry is an outstanding example of how the acceptance, as a view of reality, of what was once a tool for calculation made a fundamental difference in science. The Egyptians were familiar with geometry and used it widely in surveying and building. The Greeks later developed geometry into an exquisite formal instrument. For the Egyptians, geometry was a method of calculation—like a system of ciphers—whereas for the Greeks it was a way of demonstrating the perfect order in that Platonic world of which the observable one is but a shadow. It would be two millennia before Galileo began the transformation that resulted in the view so commonplace (and mundane) today that virtually no vestige remains of the Aristotelian ideas about natural places and motions. We conceive of space as a completely empty, infinite, three-dimensional, isotropic, disembodied receptacle distinct from the earth or any object that might be located on the earth, one that is capable of housing not only things but also such incorporeal mathematical entities as points and infinite straight lines. Such a strange idea—especially if it were taken to describe something that exists in this world—was unthinkable before the seventeenth century; yet not even Galileo fully accepted the idea of such a world as real. For him, a "straight line" was still bound to the earth's surface. Not until Newton was the task of "geometrization of the world" (to use Butterfield's 1957 phrase) completed. The transformation that led to reification of geometry, though basically one of attitude and perception rather than of empirical observation, profoundly affected the course of science.
    • What would it take to treat computation the same way we treat geometry today—as a literal description of some aspect of nature (in this case, mental activity)? Clearly we are not justified in viewing any computer simulation of a behavioral regularity as a literal model of what happens in the mind. Most current computational models of cognition are vastly under-constrained and ad hoc; they are contrivances assembled to mimic arbitrary pieces of behavior, with insufficient concern for explicating the principles in virtue of which such behavior is exhibited and with little regard for a precise understanding of the class of behaviors for which the model is supposed to provide an explanation. Although I believe that the computational view is the correct one, I have little doubt that we need to better understand the assumptions underlying this approach, while at the same time grasping what it would be like to use such models in a principled way to provide rigorous explanations—as opposed to merely mimicking certain observed behaviors. This much is clear: In order that a computer program be viewed as a literal model of cognition, the program must correspond to the process people actually perform at a sufficiently fine and theoretically motivated level of resolution. In fact, in providing such a model what we would be claiming is that the performances of the model and the organism were produced "in the same way" or that they were carrying out the same process. To conclude that the model and the organism are carrying out the same process, we must impose independent constraints on what counts as the same process. This requires a principled notion of "strong equivalence" of processes. Obviously, such a notion is more refined than behavioral equivalence or mimicry; at the very least it corresponds to equivalence with respect to a theoretically motivated set of basic operations. Further, what may count as a basic operation must be constrained independently. We cannot accept an operation as basic just because it is generally available on conventional computers. The operations built into production-model computers were chosen for reasons of economics, whereas the operations available to the mind are to be discovered empirically. Choosing a set of basic operations is tantamount to choosing an appropriate level of comparison, one that defines strong equivalence. Such a choice must be independently and empirically motivated.
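    • [Editorial illustration] The weak/strong contrast can be made concrete with a minimal Python sketch; the code and names here are illustrative assumptions of this page, not anything from the book. Two procedures compute exactly the same input-output function, and so are behaviorally (weakly) equivalent, yet they carry out different sequences of basic operations and so need not be strongly equivalent.

```python
# A minimal sketch (hypothetical code, not the book's): two procedures that
# decide whether `target` occurs in the sorted list `xs`. No test confined to
# inputs and outputs can distinguish them, so they are weakly equivalent;
# yet they execute different basic operations in different orders.

def member_linear(xs, target):
    """Scan left to right, one comparison per element."""
    for x in xs:
        if x == target:
            return True
    return False

def member_binary(xs, target):
    """Repeatedly halve the sorted list."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return True
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

# Identical input-output behavior on every probe:
for probe in range(10):
    assert member_linear([1, 3, 5, 7], probe) == member_binary([1, 3, 5, 7], probe)
```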
    • The requirements of strong equivalence and independent constraints lead us to recognize a fundamental distinction between rule-governed or representation-governed processes and what I call functional architecture. By "functional architecture" I mean those basic information-processing mechanisms of the system for which a nonrepresentational or non-semantic account is sufficient. The operation of the functional architecture might be explained in physical or biological terms, or it might simply be characterized in functional terms when the relevant biological mechanisms are not known. This particular distinction is crucial to the enterprise of taking the computational view of mind seriously; thus considerable attention is devoted to it in chapter 4 and thereafter. For example, in chapter 5 several methodological criteria are discussed for determining whether a particular function should be viewed as falling on one or the other side of the cognitive-noncognitive or architecture-process boundary. As a rule, practical methodologies for making theoretically relevant distinctions such as this one develop with the growth of the science, and thus cannot be anticipated in advance. Nonetheless, I single out two possible methodological criteria as being especially interesting; they follow directly from the basic assumptions of the metatheory, and some version of them is fairly widely used in information-processing psychology. Although these criteria are analyzed at length, I will outline them as a way of providing a preview of some issues to be discussed.
    • The first criterion (or class of criteria) derives from ideas in computer science. It appeals to the notion of strong equivalence identified with what I call the "complexity-equivalence of computational processes." From this perspective two processes are equivalent only if they are indistinguishable in respect to the way their use of computational resources (such as time and memory) varies with properties of their input, or, more accurately, they are equivalent only if the relation between properties of their input and some index of resource use takes the same mathematical form for both processes. In order for this to be true, it must be the case that primitive operations have constant computational complexity or fixed resource-use. These notions are implicit in the use of such measures as reaction time for investigating cognitive processes. The second criterion attempts to draw the architecture-process boundary by distinguishing between systematic patterns of behavior that can be explained directly in terms of properties of the functional architecture and patterns that can be explained only if we appeal to the content of the information encoded. Much is made of this distinction throughout but especially in the discussions in chapters 5 and 7. For the purpose of using this distinction as a methodological criterion, what is important is the way in which the pattern of regularities can be systematically altered. In the case of patterns attributable to representations, often the regularities can be systematically altered by the information conveyed by stimulus events. In such cases the change from one regular pattern of behavior to another, induced by such information-bearing "modulating" events, usually can be explained if we assume (1) that the organism interprets the "modulating" events in a particular way, so as to change its beliefs or goals, and (2) that the content of new beliefs and goals enters into the rational determination of actions. In practice this type of modulation is usually easily discerned, at least in humans, since it typically consists of the person "finding out" something about a situation that normally produces a regular pattern of behavior. Thus, for example, the systematic pattern of behavior I normally exhibit in response to being offered money can be radically altered if I am given certain information (for example, that the money is counterfeit, that it was stolen, that I will have to give it back). This observation translates into the following methodological principle (discussed at length in chapter 5): If we can set up situations demonstrating that certain stimulus-response regularities can be altered in ways that follow these rational principles, we say that the input-output function in question is cognitively penetrable, concluding that at least some part of this function cannot be explained directly in terms of properties of the functional architecture; that is, it is not "wired in" but requires a cognitive, or computational, or representation-governed, explanation.
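    • [Editorial illustration] The complexity-equivalence criterion can be illustrated by instrumenting the two searches sketched above (again, a toy of this page's own, assuming the hypothetical names from the previous sketch): count each process's basic operations as input size grows, the computational analogue of recording reaction-time profiles. The two resource-use curves take different mathematical forms (roughly n versus log n), so the processes are not complexity-equivalent even though they compute the same function.

```python
# Hedged illustration of complexity equivalence: tally the comparisons each
# search makes on a worst-case probe (an absent target) and watch how the
# tally varies with input size n. Constant-complexity primitive operations
# are assumed, as the criterion requires.

def linear_steps(xs, target):
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_steps(xs, target):
    steps, lo, hi = 0, 0, len(xs) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

for n in (8, 64, 512, 4096):
    xs = list(range(n))
    # Target n is never in the list, so both searches run to exhaustion.
    print(f"n={n:5d}  linear={linear_steps(xs, n):5d}  binary={binary_steps(xs, n):3d}")
    # The first count grows as n, the second as log2(n) + 1: different
    # mathematical forms, hence not complexity-equivalent.
```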
    • These two criteria are discussed in various parts of the book in connection with a number of alternative theories of aspects of cognition. In particular, they (among others) figure in constraining theories of perception to prevent the trivialization of the problem that one encounters in "direct realism," and they are appealed to in analyzing the notion of analogue process and in examining the proposal that imaginal reasoning involves a special kind of functional architecture or "medium" of representation.
    • The picture of cognitive (or "mental") processing we end up with is one in which the mind is viewed as operating upon symbolic representations or codes. The semantic content of these codes corresponds to the content of our thoughts (our beliefs, goals, and so on). Explaining cognitive behavior requires that we advert to three distinct levels of this system: the nature of the mechanism or functional architecture; the nature of the codes (that is, the symbol structures); and their semantic content (see chapter 9). This trilevel nature of explanation in cognitive science is a basic feature of the computational view of mind. It is also the feature that various people have objected to, on quite varied grounds. For example, some think all of it can be done solely at the level of mechanism, that is, in terms of biology; others (for example, Stich, 1984) believe it can be done without the semantic level. Others (for instance, Searle, 1980) claim that cognitive behavior can be explained without the symbol or the syntactic level. I discuss some of my reasons for believing we need all three levels—one of which is that different principles appear to govern each level, and that the levels appear to have considerable autonomy. One can describe regularities at each level, to a first approximation, without concern for the way the regularities are realized at the "lower" levels. Elaborating on this view (one which, I believe, is implicit in cognitive-science practice), and providing some justification for it, occupies much of this book. Readers will have to judge for themselves whether the justification is successful.
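    • [Editorial illustration] A toy sketch of the three levels, in the spirit of the numerical example developed in chapter 3 (the code is this page's own assumption, not the book's): a rule for adding unary numerals can be stated purely syntactically, mentioning only symbol shapes; under the interpretation "n strokes denotes the number n," the rule reliably tracks a semantic regularity (addition); and the functional architecture is whatever mechanism makes the primitive operation (here, concatenation) available.

```python
# Semantic level:  the process adds three and five to get eight.
# Symbol level:    it manipulates numeral strings by a purely syntactic rule
#                  (juxtaposition) that never mentions numbers.
# Architecture:    the primitive the symbol level takes for granted, string
#                  concatenation (realized here by the Python runtime rather
#                  than by neurons or transistors).

def add(numeral_a: str, numeral_b: str) -> str:
    """Syntactic rule: juxtapose two unary numerals such as '|||'."""
    assert set(numeral_a + numeral_b) <= {"|"}, "unary numerals only"
    return numeral_a + numeral_b

print(add("|||", "|||||"))  # '||||||||', interpreted as 3 + 5 = 8
```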
  • Why Bother with Foundations?
    • I have said that this book is intended merely to be a modest start in the task of developing an inventory and an understanding of some assumptions underlying cognitive science. Not everyone believes that this is a task worth starting at all. One frequently held view is that we should plunge ahead boldly, doing science as best we can, and leave questions of a more foundational nature for the historians and philosophers who follow—cleaning up behind us, as it were. I must confess that I have considerable sympathy for this point of view. Historically, good work is often (perhaps typically) done by people with inadequate understanding of the larger picture into which their work fits. Even when scientists indulge in painting the larger picture, they are frequently wrong in their analysis. The more frequent pattern is for scientists to "sleepwalk" their way to progress, as Koestler (1959) put it. So, why do I bother worrying about foundational questions?
    • The answer, I hope, will become evident as the story unfolds. The reason I personally bother (putting aside psychopathological reasons) is that I have been driven to it over the years, first, in trying to understand what I did not like about certain theories of imagery (for example, in Pylyshyn, 1973, 1978b, 1981), or certain claims about the value of computational models (Pascual-Leone, 1976; Dreyfus, 1972, to which I replied in Pylyshyn, 1974), or the non-determinacy of computational models (for example, Anderson, 1978, to whom I replied in Pylyshyn, 1979c). That, of course, may be a personal itch I have to scratch; yet, in my view, there is a more pressing need for the sort of inquiry I have undertaken in this book, quite apart from my personal proclivity. Contrary to some interpretations (for instance those expressed in the responses to Pylyshyn, 1980a, published in the same issue of the journal), this need has nothing to do with a desire to prescribe the field of cognitive science. Rather, it has to do with the need to expose what by now have become some stable intuitions which cognitive science researchers share and which to a great extent guide their work, but which have been articulated only in the most fragmentary ways.
    • Thus I see my role in writing this book as that of investigative reporter attempting to tease out the assumptions underlying the directions cognitive science research is taking and attempting to expose the foundations of this approach. The reason I am convinced that the project is worth undertaking is that cognitive science practice is far from neutral on these questions. I try to show (especially in chapters 6 to 8) that the kind of theories cognitive scientists entertain are intimately related to the set of tacit assumptions they make about the very foundations of the field of cognitive science. In cognitive science the gap between metatheory and practice is extremely narrow. There is a real sense in which the everyday research work done by scientists is conditioned by their foundational assumptions, often both implicit and unexamined. What convergence there is is due in large part to the existence of some commonality among these scientists' assumptions.
    • Many of the issues raised in this book have a long and venerable history within the philosophy of mind; many are hotly debated among philosophers today. Although I realize that I am adding to this debate, this book is not intended as primarily a philosophical analysis. For one thing, it is not written in the style of philosophical scholarship. I have tried to say what is implicit in the best cognitive science work being done, as well as defend the assumptions I think worth defending. As a result, I suspect that many psychologists and artificial-intelligence people will find the discussions somewhat on the philosophical or abstract side, missing the usual copious descriptions of experimental studies and models, while philosophers will miss the detailed argumentation, exegesis, and historical allusions. In addition, I have chosen not to provide a review of work in information-processing psychology (except for some of the results that bear directly on the points I am making) because of the many readable reviews available, for example, Massaro, 1975, and Posner, 1978.
    • As for the philosophical tradition, I have tried to present several different points of view which bear directly on the issues discussed. Many of the philosophical discussions of cognitivism are concerned with certain issues, however, such as the proper understanding of "meaning" or the role of consciousness, about which I simply have little to say. I am, in fact, treating these topics as among the "mysteries" (rather than the technical "puzzles") of cognitive science (to use Chomsky's [1976] useful distinction). While these questions may be fascinating, and while our understanding of the mind may benefit from a careful conceptual analysis of these issues, they may well be precisely the kind of questions that are irrelevant to making scientific progress. Indeed, they may be like the question, How is it possible to have action at a distance? Although physicists—or, as they were called until the nineteenth century, "natural philosophers"—debated such questions with much acrimony in the eighteenth century, the question remains unresolved today; people merely have decided that the question is not worth pursuing or that it is a question with no answer that falls within physical theory.
    • Similarly, some questions that appear to be philosophical in nature are largely empirical, in that they can be illuminated (or refined) only as the empirical and theoretical work takes shape. In such cases it may be that only limited analysis is worth doing in the absence of scientific understanding of the details of the phenomena. For example, while I devote some space to discussing "folk psychology," the question of whether it is misleading to view the mind in such terms cannot be entirely divorced from the development of detailed theories, for the reason that folk psychology is a mixture of generalizations held together by considerable mythical storytelling. Some generalizations are almost certain to be valid, whereas many explanations of their truth are likely to be false or at least not general enough. If it is true that folk psychology is a mixture of generalizations, then a developed cognitive theory is likely to contain some folk psychology terminology, but it will also leave much out, while adding its own constructs. I don't see how the development of the picture can be anticipated in detail except perhaps in the few areas where considerable progress is already evident (perhaps in such areas as psycholinguistics and vision). Thus I see no point in undertaking a defense (or a denial, as in Stich, 1984) of folk psychology in general.
    • These considerations, together with my personal predilections, have led to a style of presentation that is both eclectic and, at times, presumptive. I hope the benefits outweigh the discomfort the specialist may feel at having certain problems central to his or her discipline given less care here than they typically receive in the scholarly work of that field.
  • Some Personal Background and Intellectual Debts
    • Works such as this frequently are more a chronicle of personal search than they are the text of a message. This book has haunted me for more years than I ever imagined I could sustain an interest. I owe its completion to several pieces of good fortune. The first good fortune consisted of a number of fellowships and grants. My original ideas on the topics discussed here germinated long ago during 1969-70 spent at the Institute for Mathematical Studies in the Social Sciences at Stanford University, a year made possible by Patrick Suppes, the Foundations' Fund for Research in Psychiatry, and the Ontario Mental Health Foundation. Some of the ideas presented here appeared in a widely circulated but unpublished report (Pylyshyn, 1972), parts of which were later incorporated in my critique of models of mental imagery (Pylyshyn, 1973). Also, I considerably rethought these issues while a visiting professor at the MIT Artificial Intelligence Laboratory in 1975, thanks to Marvin Minsky, Seymour Papert, and Pat Winston. The first draft of what is questionably the same book as this was outlined while I was a fellow at the Center for Cognitive Science at the Massachusetts Institute of Technology in 1978-79. The bulk of the writing, however, was done in 1979-80, in idyllic circumstances provided by my fellowship at the Center for Advanced Study in the Behavioral Sciences, a special place that can make possible many things. I thank these institutions for their generosity, as well as the Social Sciences and Humanities Research Council of Canada for its support by granting a leave fellowship in 1978-79.
    • The second piece of good fortune was the generous provision of help by friends and colleagues. Two people, in particular, contributed so extensively to the ideas presented in this book that it would be difficult, not to mention embarrassing, to acknowledge it at every appropriate point in the text. The idea of functional architecture grew out of discussions with Allen Newell, discussions that began in the summers of 1972 and 1973 while I was a participant in a series of workshops in information-processing psychology at Carnegie Mellon University. Newell's production-system architecture (Newell, 1973b) was the catalyst for most of our discussions of what I initially referred to as "theory-laden language" and later called "functional architecture." Our continuing dialogues, by telephone, computer networks, and face to face, helped me shape my ideas on this topic. If there is anything to the idea, Newell deserves much of the credit for introducing it. The second person who contributed basic ideas on the representational view of mind, and who patiently explained to me what philosophy was for, is Jerry Fodor. Fodor's influence is evident throughout this book. Were it not for his interest, I would probably have made fewer but less defensible claims than I do. In addition to contributing to the content of this book, Fodor and Newell, from nearly opposite poles of intellectual style, have the following important trait in common, one which undoubtedly has affected the writing of this book: they are among the few people in cognitive science who appear to be guided by the premise that if you believe P, and if you believe that P entails Q, then even if Q seems more than a little odd, you have some intellectual obligation to take seriously the possibility that Q may be true, nonetheless. That is what it means to take ideas seriously, and that is what I have been arguing we should do with the idea of cognition as computation.
    • Because various pieces of paper bearing the same title as this book have been around for so long, I have received much advice from numerous people on what to do with them. Among those whose advice I did not ignore entirely are Sue Carey, Ned Block, and Steve Kosslyn, colleagues at MIT who argued with me about mental imagery; Noam Chomsky and Hilary Putnam, who argued about the philosophy of mind; Dan Dennett, John Haugeland, Pat Hayes, John McCarthy, and Bob Moore, my colleagues at the Center for Advanced Study in the Behavioral Sciences, in Palo Alto; and John Biro, Bill Demopoulos, Ted Elcock, Ausonio Marras, Bob Matthews, and Ed Stabler, colleagues or visitors at the University of Western Ontario's Centre for Cognitive Science. In addition, the following people read parts of the manuscript and offered useful suggestions: Patricia Churchland, Bill Demopoulos, Dan Dennett, Jerry Fodor, Allen Newell, John Macnamara, Massimo Piattelli-Palmarini, Steve Pinker, and Ed Stabler. You might wonder (as does the publisher) why, with all this support and advice, the book was not ready years ago².
    • Finally, a personal note. Any piece of work as involving and protracted as this inevitably takes its toll. I am thankful to those close to me for putting up with the somewhat higher level of neglect, not to mention abuse, that my attention to this book engendered.



In-Page Footnotes ("Pylyshyn (Zenon) - Computation and Cognition: Toward a Foundation of Cognitive Science")

Footnote 1:
  • For example, Juan Pascual-Leone characterizes computational models as follows: "Simulation models ... of modern psychology play an epistemic role not unlike the illustrative analogies, case examples, and language-games which Wittgenstein made popular in modern analytical philosophy ... which the investigator uses as figurative supports (arguments) on which to apply his analytical mental operations." Pascual-Leone, 1976, p. 111.
  • Even the practicing computationalist John Anderson does not take computational models as making literal claims: "Our aspiration must be the correct characterization of the data. If a computer simulation model provides it, fine. On the other hand, there are those (for example, Newell and Simon, 1972) who seemed committed to arguing that at an abstract level, the human and the computer are the same sort of device. This may be true, but the problem is that a unique characterization of man's cognitive functioning does not exist. So it's ... pointless to try to decide what kind of device he is." Anderson, 1976, p. 15.
Footnote 2:
  • For those interested in such intimate details, the book was primarily composed using the EMACS editor at the MIT-AI Laboratory and at the Xerox Palo Alto Research Center (for the use of which I am very grateful), with final (massive) revisions using the Final Word editor on an IBM-PC. A handful of TECO macros prepared it for the Penta Systems International phototypesetting system at The MIT Press. Were it not for all this electronic wizardry, the book probably would have been shorter; but very likely it would still be in process.

Book Comment

Bradford Books, MIT Press, Cambridge, Massachusetts, Paperback Edition, 1986


