- It has long been my conviction that many of the problems encountered in artificial intelligence research are basically philosophical problems. In particular, in order to build an artificial rational agent, one must first have a clear theory of rationality to serve as a target for implementation. Accordingly, the OSCAR project is aimed at providing such a theory and building an AI system to implement it.
- In its present incarnation, OSCAR is a programmable architecture for a rational agent, based upon a general-purpose defeasible reasoner. To use this architecture to construct an actual agent, one must fill it out in various ways. This can be regarded as a matter of programming the architecture to implement proposed principles of rationality.
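To make the notion of a defeasible reasoner concrete, here is a minimal toy sketch, not OSCAR's actual implementation (which handles undercutting defeaters and degrees of justification, among much else). The `Rule` class and `conclusions` function are hypothetical names introduced purely for illustration: a defeasible rule licenses its conclusion from its premises unless a defeater is also among the known or derived facts.

```python
# Hypothetical sketch of defeasible inference (NOT OSCAR's actual code):
# a rule fires when its premises hold, unless one of its defeaters holds.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    premises: frozenset                  # facts required for the rule to fire
    conclusion: str                      # what the rule licenses
    defeaters: frozenset = frozenset()   # facts that block the conclusion

def conclusions(facts, rules):
    """Return the conclusions derivable from `facts` by rules whose
    premises all hold and whose defeaters are all absent."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if (r.premises <= derived
                    and not (r.defeaters & derived)
                    and r.conclusion not in derived):
                derived.add(r.conclusion)
                changed = True
    return derived - set(facts)

# Classic example: birds normally fly, but penguins are an exception.
rules = [Rule(frozenset({"bird"}), "flies", frozenset({"penguin"}))]
print(conclusions({"bird"}, rules))             # {'flies'}
print(conclusions({"bird", "penguin"}, rules))  # set()
```

The key contrast with ordinary deductive inference is that adding information (here, "penguin") can retract a previously licensed conclusion, which is exactly the non-monotonicity that a defeasible reasoner must manage.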
- For those who are skeptical about the very possibility of interesting AI systems, it should be emphasized that this system is fully implemented, and available from the author for use by other researchers.
- I will begin by giving a very brief sketch of the general architecture, and then I will turn to some questions about practical reasoning that will constitute the main focus of this paper. These are questions that must be answered before OSCAR can become a full-fledged rational agent.