Glossary

Solomonoff Induction

Solomonoff induction, introduced by Ray Solomonoff in 1960–1964, is a theoretical universal procedure for predicting the next symbol in any computable sequence. The framework assigns to any finite sequence of observations x the prior probability M(x) = Σ 2^(−ℓ(p)), where the sum runs over all programs p that, when run on a fixed universal Turing machine, produce x as a prefix of their output, and ℓ(p) is the length of p in bits. M(x) is called the algorithmic probability of x.
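
As a crude illustration of the definition, the sketch below approximates this sum by brute force: it enumerates every bit string up to a small length, runs each on a deliberately trivial two-rule "machine" standing in for the universal Turing machine, and adds 2^(−ℓ(p)) whenever the output begins with x. The toy machine, the length cutoff, and the function names are illustrative assumptions rather than part of Solomonoff's construction (in particular the machine is neither universal nor prefix-free), so the result is only a rough stand-in for the true M(x).

```python
from itertools import product

def run_toy_machine(program: str) -> str:
    """Deliberately trivial stand-in for a universal Turing machine U
    (an assumption for illustration only): the first bit selects a rule,
    and the remaining bits are either echoed or bitwise complemented."""
    if not program:
        return ""
    head, body = program[0], program[1:]
    if head == "0":
        return body                                          # rule 0: echo the body
    return "".join("1" if b == "0" else "0" for b in body)   # rule 1: complement the body

def approx_algorithmic_probability(x: str, max_len: int = 12) -> float:
    """Truncated version of M(x) = sum of 2^(-len(p)) over programs p whose
    output starts with x, restricted to programs of length <= max_len."""
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            p = "".join(bits)
            if run_toy_machine(p).startswith(x):
                total += 2.0 ** (-length)
    return total

print(approx_algorithmic_probability("0101"))
```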

The induction rule is then the standard Bayesian one: condition the algorithmic prior on each new observation and predict the next symbol from the resulting posterior. Solomonoff proved that these predictions converge rapidly to those of the true generating distribution for any computable source, and that the prior dominates every computable prior up to a multiplicative constant depending on the universal machine. In a precise sense, it is the optimal universal predictor.
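
Continuing the toy sketch above (with the same caveats), the Bayesian update takes a particularly simple form here: the predicted probability of the next symbol a after observing x is just the ratio M(xa)/M(x). Because M is a semimeasure, the two predicted probabilities need not sum exactly to one.

```python
def predict_next_bit(x: str, max_len: int = 12) -> dict:
    """Sequential prediction with the approximated algorithmic prior:
    P(next symbol = a | x) = M(xa) / M(x)."""
    m_x = approx_algorithmic_probability(x, max_len)
    if m_x == 0.0:
        raise ValueError("no program up to max_len produces x; raise max_len")
    return {a: approx_algorithmic_probability(x + a, max_len) / m_x for a in "01"}

print(predict_next_bit("0101"))
```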

Solomonoff induction is uncomputable: deciding which programs eventually produce x requires solving the halting problem, which is undecidable. It nevertheless provides a theoretical gold standard against which practical inductive systems can be measured. It subsumes Bayesian inference (the prior is universal), minimum-description-length learning (the most probable single program is the shortest one), Occam's razor (shorter explanations receive higher prior probability), and Kolmogorov complexity in a single mathematical structure.
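
Two standard results make this unification precise; they are stated informally here, with constants that depend on the chosen universal machine, and using the discrete variant m(x) of algorithmic probability in the first line (the sequence version agrees up to logarithmic terms):

```latex
% Levin's coding theorem: algorithmic probability and prefix Kolmogorov
% complexity determine each other up to an additive constant.
-\log_2 m(x) \;=\; K(x) + O(1)

% Dominance: the Solomonoff prior M multiplicatively dominates every
% computable measure \mu, with a constant tied to \mu's own complexity,
% which is the sense in which any computable Bayesian prior is subsumed.
M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x)
```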

Marcus Hutter's AIXI (2000–2005) is the analogous universal agent: at each step it selects the action that maximises expected future reward under the Solomonoff prior over environments. AIXI is also uncomputable but has spawned computable approximations and is influential in theoretical AI safety.
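
Hutter's action-selection rule can be written, informally and for a finite horizon m, as an expectimax over future observation–reward sequences, weighted by the same 2^(−ℓ(q)) prior applied to environment programs q (o denotes observations and r rewards):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl(r_k + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```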

Related terms: Ray Solomonoff, Bayesian Inference
