- You all know about the importance of being Ernest. This paper is about the importance of being Oscar. Who, might you ask, is Oscar? Oscar is the little man who lives in my computer and reasons just like we do, or, at least, almost just like we do. Let me tell you about Oscar.
- One of the major accomplishments of contemporary epistemology has been the recognition that most reasoning is defeasible, in the sense that reasons may justify a conclusion when taken alone, but no longer justify that conclusion when additional "defeating" information is added to them. This is what computer scientists working in AI call 'non-monotonic reasoning'. Philosophers know a lot about some aspects of defeasible reasoning. We know about prima facie reasons and defeaters, and we know quite a bit about what prima facie reasons there are. But we do not have a good understanding of precisely how these constituents are put together in reasoning to arrive at conclusions. Our situation is analogous to knowing what primitive logical entailments there are, but not knowing the principles for constructing deductive arguments out of those entailments. The purpose of this paper is to investigate the structure of defeasible reasoning. Given an array of defeasible and non-defeasible reasons, how are they to be used in drawing conclusions?
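- The non-monotonic character of defeasible reasoning can be sketched in a few lines of Python. This is a toy illustration, not OSCAR's actual machinery: the rule representation, the `justified` function, and the bird/penguin example are my own choices, used only to show how adding a defeater to the belief set withdraws a previously justified conclusion.

```python
# Toy sketch of defeasible (non-monotonic) inference: a prima facie
# reason supports its conclusion unless one of its defeaters is also
# believed. Illustrative only -- not Pollock's OSCAR implementation.

def justified(conclusion, beliefs, rules):
    """Return True if some rule supports the conclusion from the
    current beliefs and none of that rule's defeaters is believed."""
    for premise, concl, defeaters in rules:
        if concl == conclusion and premise in beliefs:
            if not any(d in beliefs for d in defeaters):
                return True
    return False

# Prima facie rule: "x is a bird" is a reason for "x flies",
# defeated by the additional information "x is a penguin".
rules = [("bird", "flies", ["penguin"])]

print(justified("flies", {"bird"}, rules))             # True
print(justified("flies", {"bird", "penguin"}, rules))  # False: defeated
```

Note the non-monotonicity: enlarging the belief set does not merely fail to add conclusions, it can retract one that was justified before.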
- A satisfactory theory of defeasible reasoning ought to be sufficiently precise that we can actually program a computer to reason in that way. This may seem like a pedestrian point, but I have found it to be of overwhelming practical importance in getting the theory of defeasible reasoning right. Constructing a computer program to implement a theory of reasoning and then seeing what the program does is an extremely useful tool for coming to understand your own theory. My experience has been that the program almost invariably does unexpected things. We are frequently in a position of knowing what conclusions should be obtainable from a certain set of inputs, and if those conclusions are not forthcoming from the program then we know that there is something wrong with the theory. Analyzing why the program yields the wrong result is a powerful technique for discovering flaws in the original philosophical theory. In effect, the computer program is a mechanical aid in the construction of counterexamples. If the philosophical theory is sufficiently complex, this technique can lead to the discovery of counterexamples that would probably never have been found from the comfort of your armchair.
- The implementation of theories of reasoning on computers is useful in another way as well. Somewhat paradoxically, it helps in assuring that the theories are psychologically realistic. That a theory can be implemented on a computer does not guarantee that that is the way people work, but if it cannot be implemented on a computer, that pretty much guarantees that people do not and cannot work that way. Also, in trying to figure out how some feature of reasoning works, it often helps to approach the problem from an "engineering" perspective. If you have at least a rough idea of what a reasoning system should accomplish, ask yourself how you might build something that does that. That tends to yield valuable insights into the way human reasoning works. Frequently, the only obvious solution to a problem of system design turns out upon reflection to be precisely the solution adopted by human beings. So appeal to computers can help both in constructing theories of reasoning and in testing them.
- In light of these observations, we can think of AI work on non-monotonic reasoning and philosophical work on the logical analysis of defeasible reasoning as addressing the same problem from two different perspectives. So what I am going to do here is construct a theory of defeasible reasoning, describe a computer program called 'OSCAR' that implements the theory, and then say a few words about how well OSCAR performs.
Text Colour Conventions (see disclaimer)
- Blue: Text by me; © Theo Todman, 2019
- Mauve: Text by correspondent(s) or other author(s); © the author(s)