Classical logic is monotonic: if a conclusion follows from premises $\Gamma$, it follows from any superset $\Gamma' \supseteq \Gamma$. Adding information never invalidates a conclusion.
Non-monotonic reasoning drops this property: new information can retract conclusions. This is essential for commonsense reasoning, where conclusions are typically defaults that hold "unless something exceptional applies".
Classic example:
- Given "Tweety is a bird" and the default "birds fly", conclude "Tweety flies".
- Adding "Tweety is a penguin", retract "Tweety flies".
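The Tweety example can be sketched in a few lines of Python using negation as failure: a default conclusion holds while its exception cannot be proved, and adding one fact retracts it. (The predicate encoding and names are illustrative assumptions, not a standard library.)

```python
# Facts are stored as (predicate, argument) tuples.
facts = {("bird", "tweety")}

def flies(x):
    """Default rule: birds fly unless known to be penguins.
    The 'not in facts' check is negation as failure."""
    return ("bird", x) in facts and ("penguin", x) not in facts

assumed = flies("tweety")          # True: bird, and no exception is provable
facts.add(("penguin", "tweety"))   # new information arrives
retracted = flies("tweety")        # False: the default conclusion is retracted
```

Note the non-monotonicity: enlarging the fact base shrinks the set of conclusions, which classical logic forbids.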
Major non-monotonic formalisms:
Default logic (Reiter 1980): inference rules of the form $A : B / C$, read "if $A$ is provable and $B$ is consistent with current beliefs, then conclude $C$". A default theory may have multiple extensions (alternative consistent sets of conclusions).
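A rough sketch of how multiple extensions arise, using the well-known "Nixon diamond" (a standard example from the default-logic literature, not from the text above): Quakers are pacifists by default, Republicans are non-pacifists by default. Applying the defaults in different orders yields different extensions. This greedy ordering approach is a simplification of Reiter's fixed-point definition.

```python
from itertools import permutations

facts = {"quaker", "republican"}
# Defaults as (prerequisite, justification, conclusion) triples;
# a leading "-" marks negation.
defaults = [("quaker", "pacifist", "pacifist"),
            ("republican", "-pacifist", "-pacifist")]

def neg(p):
    return p[1:] if p.startswith("-") else "-" + p

def extension(order):
    """Apply each default A : B / C when A is believed and -B is not."""
    beliefs = set(facts)
    for pre, just, concl in order:
        if pre in beliefs and neg(just) not in beliefs:
            beliefs.add(concl)
    return frozenset(beliefs)

# Trying every application order surfaces both extensions:
# one concluding "pacifist", one concluding "-pacifist".
extensions = {extension(order) for order in permutations(defaults)}
```

Whichever default fires first blocks the other, so the theory has two mutually inconsistent but internally consistent extensions.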
Circumscription (McCarthy 1980): minimise the extension of designated predicates, typically an "abnormality" predicate. This generalises the closed-world assumption: anything not forced to be abnormal is assumed normal.
Autoepistemic logic (Moore 1985): reasoning about one's own beliefs. "If I have no reason to believe $\neg P$, conclude $P$".
Logic programming with negation as failure: Prolog's `\+ P` succeeds exactly when $P$ cannot be proved, operationalising the closed-world assumption.
Belief revision (AGM postulates, Alchourrón-Gärdenfors-Makinson 1985): how should an agent update its beliefs given conflicting new information?
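A toy illustration of revision via the Levi identity, which defines revision as contraction followed by expansion: $K * p = (K \div \neg p) + p$. Beliefs here are bare literals with no logical closure, which sidesteps the hard parts of AGM (the names and encoding are assumptions for illustration).

```python
def neg(p):
    return p[1:] if p.startswith("-") else "-" + p

def contract(beliefs, p):
    """Give up belief p (here: simply drop the literal)."""
    return beliefs - {p}

def revise(beliefs, p):
    """Levi identity: retract the conflicting belief, then add p."""
    return contract(beliefs, neg(p)) | {p}

K = {"bird(tweety)", "flies(tweety)"}
K = revise(K, "-flies(tweety)")   # conflicting new information
# K is now {"bird(tweety)", "-flies(tweety)"}: the new belief is
# accepted (success) and no contradiction remains (consistency).
```

The AGM postulates constrain which revision operators are rational; this sketch satisfies the simplest ones (success, consistency) only because literals make conflict detection trivial.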
Connection to modern AI: non-monotonic reasoning was a major preoccupation of 1970s-1990s symbolic AI as a foundation for commonsense. Modern LLMs handle non-monotonic reasoning implicitly through learned statistical patterns: they revise stated conclusions in light of new context, without explicit logical machinery. Whether this is "real" non-monotonic reasoning or merely simulation remains contested.
The frame problem is closely related: how to specify what does not change when an action is taken, without listing every non-effect. Both the classic frame-problem and non-monotonic-reasoning literatures are foundational to modern reasoning-model and agent research.
Related terms: Frame Problem, Circumscription, Prolog, Knowledge Representation
Discussed in:
- Chapter 1: What Is AI?, A Brief History of AI