Glossary

PAC Learning

The PAC (Probably Approximately Correct) learning framework, introduced by Leslie Valiant in his 1984 paper A Theory of the Learnable, gave AI a precise computational definition of learning from examples. A concept class C is PAC-learnable if there exists a polynomial-time algorithm that, given m = poly(1/ε, 1/δ, n) labelled examples of any target concept c ∈ C, drawn i.i.d. from an arbitrary, unknown distribution, returns with probability at least 1 − δ a hypothesis with error at most ε under that same distribution.

Two requirements distinguish PAC from earlier informal notions of learnability. "Probably": the algorithm may fail (return a bad hypothesis), but only with probability at most δ. "Approximately correct": the returned hypothesis need not be perfect, just within error ε of the target. The framework legitimised the modern statistical view of learning and gave it computational teeth.
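
Both knobs can be seen in a small simulation. The sketch below is illustrative rather than from the entry itself (the target rectangle, ε, δ, and trial count are arbitrary choices): it runs the classic tightest-fit learner for axis-aligned rectangles with the textbook sample bound m ≥ (4/ε) ln(4/δ), and checks that the fraction of runs with error above ε stays below δ.

```python
import math
import random

def trial(m, target=(0.2, 0.7, 0.3, 0.8)):
    """One PAC trial: draw m uniform points on [0,1]^2, fit the tightest
    rectangle around the positives, return the hypothesis's true error."""
    x0, x1, y0, y1 = target
    pts = [(random.random(), random.random()) for _ in range(m)]
    pos = [(x, y) for x, y in pts if x0 <= x <= x1 and y0 <= y <= y1]
    if not pos:  # no positive examples seen: predict all-negative
        return (x1 - x0) * (y1 - y0)  # error = whole area of the target
    hx0 = min(x for x, _ in pos); hx1 = max(x for x, _ in pos)
    hy0 = min(y for _, y in pos); hy1 = max(y for _, y in pos)
    # The tightest fit lies inside the target, so its true error is the
    # target's area minus the hypothesis's area.
    return (x1 - x0) * (y1 - y0) - (hx1 - hx0) * (hy1 - hy0)

eps, delta = 0.1, 0.05
m = math.ceil((4 / eps) * math.log(4 / delta))  # textbook sample bound
runs = 1000
failures = sum(trial(m) > eps for _ in range(runs))
print(f"m = {m}, failure rate = {failures / runs:.3f} (bound: {delta})")
```

Because the tightest-fit hypothesis is always contained in the target, its true error is just the uniform measure of the region it misses; the empirical failure rate typically lands well below δ, since the bound is loose.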

Many concept classes were shown PAC-learnable: monomials, decision lists of bounded length, axis-aligned rectangles, and k-DNF formulas for constant k. Others were shown hard: properly learning 3-term DNF formulas is NP-hard (Pitt and Valiant, 1988), and under cryptographic assumptions boolean formulas and constant-depth threshold circuits are not efficiently PAC-learnable (Kearns and Valiant, 1994). The boosting theorem (Schapire, 1990; Freund, 1990), that any weak learner can be boosted to a strong learner, is one of the most celebrated PAC results.
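
The boosting theorem is constructive, and its best-known practical descendant is AdaBoost (Freund and Schapire, 1997). The compact sketch below (the toy one-dimensional data and the threshold-stump weak learner are illustrative choices) shows the mechanism: re-weight the examples the ensemble gets wrong, call the weak learner again, and output a weighted majority.

```python
import math

def stump_learn(X, y, w):
    """Weak learner: the threshold/sign stump with lowest weighted error."""
    best_err, best_stump = float("inf"), None
    for t in sorted(set(X)):
        for s in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if (s if xi >= t else -s) != yi)
            if err < best_err:
                best_err, best_stump = err, (t, s)
    return best_err, best_stump

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                 # uniform example weights to start
    ensemble = []                     # list of (alpha, stump) pairs
    for _ in range(rounds):
        err, (t, s) = stump_learn(X, y, w)
        err = max(err, 1e-12)         # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # vote of this weak hypothesis
        ensemble.append((alpha, (t, s)))
        # Mistakes get heavier, correctly classified examples get lighter.
        w = [wi * math.exp(-alpha * yi * (s if xi >= t else -s))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x >= t else -s) for a, (t, s) in ensemble)
    return 1 if score >= 0 else -1

# Toy data no single stump can fit: the positive class is a middle interval.
X = [0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9]
y = [-1, -1, 1, 1, 1, -1, -1]
model = adaboost(X, y)
print([predict(model, x) for x in X])  # matches y after a few rounds
```

No single stump classifies the middle interval correctly, but the weighted majority of a few stumps already does; this weak-to-strong conversion is what the boosting theorem guarantees in general.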

PAC theory developed in parallel with Vapnik's statistical learning theory; the two have since converged into a unified mathematical framework. PAC-Bayes bounds (McAllester, 1999) underpin some of the tightest non-vacuous generalisation bounds known for neural networks (Dziugaite and Roy, 2017). Valiant won the 2010 Turing Award in part for the PAC framework.
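
In one common form of McAllester's result (several variants exist; this is a standard statement rather than a quotation from the entry), for any prior P over hypotheses fixed before seeing the data, with probability at least 1 − δ over an i.i.d. sample of size m, every posterior Q satisfies:

$$
\mathbb{E}_{h \sim Q}\big[\mathrm{err}(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\widehat{\mathrm{err}}(h)\big] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(m/\delta)}{2(m-1)}}
$$

The KL(Q ‖ P) term penalises posteriors that stray far from the prior, which is one reason PAC-Bayes bounds can remain non-vacuous for trained networks whose weights stay close to initialisation.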

Related terms: Statistical Learning Theory, VC Dimension, Boosting
