- I shall focus this discussion on one small thread in the increasingly complex weave of artificial intelligence (AI) and philosophy of mind: the attempt to explain how rational thought is mechanically possible. This is, historically, the crucial place where AI meets philosophy of mind. But it is, I shall argue, a place in flux. For our conceptions of what rational thought and reason are, and of what kinds of mechanism might explain them, are in a state of transition.
- To get a sense of this sea change, I shall compare several visions and approaches, starting with what might be termed the Turing-Fodor conception of mechanical reason, proceeding through connectionism with its skill-based model of reason, then moving to issues arising from robotics, neuroscientific studies of emotion and reason, and work on “ecological rationality.” As we shall see, there is probably both more, and less, to human rationality than originally met the eye.
- Rationality, we have now seen, involves a whole lot more, and a whole lot less, than originally met the eye. It involves a whole lot more than local, syntax-based inference defined over tractable sets of quasi-sentential encodings. Even Fodor admits this – or at least, he admits that it is not yet obvious how to explain global abductive inference using such resources. It also involves a whole lot more than (as it were) the dispassionate deployment of information in the service of goals. For human reason seems to depend on a delicate interplay in which emotional responses (often unconscious ones) help sift our options and bias our choices in ways that enhance our capacities of fluent, reasoned, rational response. These emotional systems, I have argued, are usefully seen as a kind of wonderfully distilled store of hard-won knowledge concerning a lifetime’s experience of choosing and acting.
- But rationality may also involve significantly less than we tend to think. Perhaps human rationality (and I am taking that as our constant target) is essentially a quick-and-dirty compromise forged in the heat of our ecological surround. Fast and frugal heuristics, geared to making the most of the cheapest cues that allow us to get by, may be as close as nature usually gets to the space of reasons. Work in robotics and connectionism further contributes to this vision of less as more, as features of body and world are exploited to press maximal benefit from basic capacities of on-board, prototype-based reasoning. Even the bugbear of global abductive reason, it was hinted, just might succumb to some wily combination of fast and frugal heuristics and simple syntactic search.
- Where then does this leave the reputedly fundamental question "how is rationality mechanically possible?" It leaves it, I think, at an important crossroads, uncertainly poised between the old and the new. If (as I believe) the research programs described in sections 13.4-13.8 are each tackling important aspects of the problem, then the problem of rationality becomes, precisely, the problem of explaining the production, in social, environmental, and emotional context, of broadly appropriate adaptive response. Rationality (or as much of it as we humans typically enjoy) is what you get when this whole medley of factors is tuned and interanimated in a certain way. Figuring out this complex ecological balancing act just is figuring out how rationality is mechanically possible.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2021
- Mauve: Text by correspondent(s) or other author(s); © the author(s)