Glossary

Symbolic AI

Symbolic AI, sometimes called classical AI or GOFAI ("Good Old-Fashioned AI", a phrase coined by the philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea), is the tradition of AI research in which intelligence is modelled as the manipulation of explicit symbolic representations through formally defined rules. Knowledge is encoded in logical formulas, frames, semantic networks, production rules or similar structures; reasoning proceeds by inference, search or pattern-matching over those structures.
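The core idea, explicit symbols manipulated by explicit rules, can be sketched in a few lines. The following is a minimal, illustrative forward-chaining engine over production rules (the rule set and fact names are invented for the example, not drawn from any particular system):

```python
# Minimal symbolic reasoning sketch: forward chaining over production
# rules of the form (set-of-conditions, conclusion), fired until no
# rule can add a new fact (a fixed point).

RULES = [
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"bird", "not_penguin"}, "can_fly"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "not_penguin"}, RULES)
# "bird" is inferred first, which then licenses "can_fly".
```

Everything here is inspectable: the knowledge is the rule list, and the reasoning trace is the sequence of rule firings, which is exactly the transparency symbolic systems are prized for.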

Philosophical core

The tradition's philosophical core was articulated in Allen Newell and Herbert Simon's 1975 ACM Turing Award lecture, Computer Science as Empirical Inquiry: Symbols and Search (published 1976), which advanced the physical symbol system hypothesis:


A physical symbol system has the necessary and sufficient means for general intelligent action.

On this view, what matters about a brain is not that it is wet or biological but that it implements a symbol system. The hypothesis was an empirical claim, to be falsified or supported by what such systems could be made to do, and it animated AI research for thirty years.

Products

Prominent products of the symbolic tradition include:

  • Theorem provers: the Logic Theorist (Newell, Shaw, Simon 1956), resolution-based provers (Robinson 1965), Boyer–Moore (1970s), ACL2, HOL Light, Coq, Lean, descendants still used in formal mathematics today.
  • Problem solvers: the General Problem Solver (GPS) (Newell & Simon 1959), means-ends analysis.
  • Expert systems: DENDRAL (Feigenbaum, Buchanan, Lederberg 1965) for molecular structure inference; MYCIN (Shortliffe 1972) for antibiotic recommendation; XCON / R1 (McDermott 1980) for VAX configuration at DEC, which saved DEC tens of millions of dollars annually and is sometimes credited with making expert systems commercially serious.
  • Planners: STRIPS (Fikes & Nilsson 1971), NOAH, SHOP, Graphplan, FF, Fast Downward.
  • Knowledge bases: CYC (Lenat 1984–), a multi-decade project to encode common-sense knowledge in formal logic.
  • Logic programming: Prolog (Colmerauer & Kowalski 1972), Datalog, answer-set programming.
  • Constraint satisfaction: backtracking search, arc-consistency propagation (e.g. AC-3), and the wider CSP literature.
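To make the constraint-satisfaction entry concrete, here is a small backtracking solver for the textbook map-colouring CSP (colouring the Australian states so that no two neighbours share a colour); the problem instance is the standard teaching example, and the solver is a plain depth-first backtracker without propagation:

```python
# Backtracking search for a toy CSP: colour each region so that
# neighbouring regions receive different colours.

NEIGHBOURS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLOURS = ["red", "green", "blue"]

def backtrack(assignment, variables):
    """Assign colours depth-first, undoing a choice when it dead-ends."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for colour in COLOURS:
        if all(assignment.get(n) != colour for n in NEIGHBOURS[var]):
            assignment[var] = colour
            result = backtrack(assignment, variables)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next colour
    return None  # no colour works: signal failure upward

solution = backtrack({}, list(NEIGHBOURS))
```

Real CSP solvers interleave this search with constraint propagation (AC-3 and its successors) to prune domains before branching, but the backtrack-and-undo skeleton is the same.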

Decline and renaissance

Symbolic AI dominated from the 1956 Dartmouth workshop until roughly 2012, when the AlexNet result of Krizhevsky, Sutskever and Hinton on ImageNet shifted the field decisively toward sub-symbolic, neural-network-based approaches. Symbolic methods had struggled with three persistent problems:

  1. Knowledge acquisition: hand-coding rules is brittle and expensive. Lenat's CYC project, with hundreds of person-years of input, never reached the threshold of robust common-sense reasoning.
  2. Handling perception and noise: classical symbol systems assume crisp predicates, but the real world delivers pixels, waveforms and ambiguous sentences.
  3. Scaling: many symbolic algorithms suffer combinatorial explosion on realistic problem sizes.

Symbolic methods have nonetheless enjoyed a quiet renaissance in specific domains:

  • SAT solvers (Chaff, MiniSat, Glucose, Kissat) and SMT solvers (Z3, CVC5) underlie modern formal verification, software model checking and hardware design.
  • Classical planners (Fast Downward, LAMA) power operations research, robotics task planning and aerospace autonomy.
  • Theorem provers (Lean, Coq, Isabelle) underpin certified mathematics, Gonthier's four-colour-theorem proof, Hales' Kepler-conjecture proof, and the Mathlib library.
  • Knowledge graphs (Wikidata, Google's Knowledge Graph, biomedical ontologies) are symbolic structures embedded in modern web infrastructure.
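The SAT solvers in the list above descend from the DPLL procedure of the early 1960s. A stripped-down sketch of that core idea, in pure Python rather than the optimised C of Chaff or MiniSat, shows the recursive structure (clauses are sets of integer literals, with a negative integer denoting negation):

```python
# A minimal DPLL sketch: unit propagation plus case-splitting on a
# literal. Industrial solvers add clause learning, watched literals
# and restarts, but share this recursive skeleton.

def dpll(clauses):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    if not clauses:
        return set()                      # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return None                       # an empty clause: contradiction
    # Unit propagation: a one-literal clause forces that literal.
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    lit = unit if unit is not None else next(iter(clauses[0]))
    choices = [lit] if unit is not None else [lit, -lit]
    for choice in choices:
        # Drop satisfied clauses; remove the falsified literal elsewhere.
        reduced = [c - {-choice} for c in clauses if choice not in c]
        model = dpll(reduced)
        if model is not None:
            return model | {choice}
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model = dpll([{1, 2}, {-1, 2}, {-2, 3}])
```

That a sixty-year-old symbolic algorithm, suitably engineered, now verifies hardware and software at industrial scale is the renaissance in miniature.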

Neuro-symbolic integration

Neuro-symbolic AI, combining neural perception with symbolic reasoning, is an active research front in 2020s AI. Examples include AlphaGeometry (DeepMind 2024), which couples a transformer with a symbolic deduction engine to solve IMO-level geometry; DeepProbLog, which integrates probabilistic logic programs with neural predicates; and the use of large language models as orchestrators that call symbolic tools (calculators, SQL engines, theorem provers) via tool-use APIs. The hope is that the strengths of the two paradigms, neural perception and symbolic precision, learning and verification, intuition and logic, will turn out to be complementary rather than rival, and that the next generation of AI systems will be hybrid by design.
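The orchestration pattern mentioned above can be sketched without any real model API. In this deliberately toy example, `fake_llm` is a hard-coded stand-in for a language model's tool-call decision, and the "calculator" is a restricted `eval`; both are assumptions for illustration, not any production system's interface:

```python
# Hedged sketch of the LLM-as-orchestrator pattern: a (stubbed) model
# chooses a symbolic tool, and the tool computes the exact answer.

def fake_llm(question):
    # A real model would emit this structured call itself;
    # here it is hard-coded for the demonstration.
    return {"tool": "calculator", "argument": "37 * 43"}

TOOLS = {
    # Toy arithmetic evaluator; a real system would use a safe parser.
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}),
}

def orchestrate(question):
    call = fake_llm(question)
    return TOOLS[call["tool"]](call["argument"])

answer = orchestrate("What is 37 times 43?")  # 1591, computed symbolically
```

The division of labour is the point: the neural component handles the ambiguity of natural language, while the symbolic tool guarantees the arithmetic is exactly right.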

Related terms: Logic Programming, Expert System, Prolog, CYC

Discussed in:

Textbook of Usability · Textbook of Digital Health