1926–2009, Mathematician; founder of algorithmic information theory
Also known as: Ray J. Solomonoff
Ray Solomonoff was an American mathematician who, in two technical reports of 1960 and the two-part 1964 Information and Control paper "A Formal Theory of Inductive Inference", founded algorithmic probability: the framework that assigns to any sequence of observations a probability obtained by weighting each program that produces the sequence on a universal Turing machine by the inverse exponential of its length, so that the shortest such program dominates. The framework is the basis of Solomonoff induction, an idealised theory of universal inductive inference that subsumes Bayesian inference, minimum-description-length learning and Kolmogorov complexity in a single mathematical structure.
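In the standard notation of later algorithmic information theory (not part of the original entry), the universal a priori probability of a finite binary string $x$ under a prefix universal machine $U$ can be written as

```latex
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-\ell(p)},
```

where $\ell(p)$ is the length of program $p$ in bits. The shortest program dominates the sum, so $M(x)$ is close to $2^{-K(x)}$, with $K(x)$ the Kolmogorov complexity of $x$; this is the sense in which "inverse exponential of program length" defines the prior.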
Solomonoff was an attendee of the 1956 Dartmouth workshop and one of the few participants who arrived with a pre-existing research programme on machine learning. His work was independent of, and slightly earlier than, Andrey Kolmogorov's and Gregory Chaitin's parallel discoveries of the algorithmic complexity of strings; the three converging streams now constitute algorithmic information theory.
Solomonoff induction is uncomputable in general: it requires a sum over all programs that produce a given output, and whether an arbitrary program halts is undecidable. It nonetheless provides a theoretical gold standard against which practical inductive systems can be measured. Marcus Hutter's AIXI model of universal artificial intelligence (2000–2005) is built directly on Solomonoff's foundations.
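Because the full mixture over all programs is uncomputable, concrete illustrations restrict it to a computable program class. The following sketch uses a hypothetical toy "machine" (a stand-in of our own devising, not Solomonoff's construction) in which a program is a bitstring and its output is simply that pattern repeated; every program halts, so the Solomonoff-style weighted prediction becomes computable:

```python
from itertools import product

def run(program: str, n: int) -> str:
    """Toy 'machine': a program is a bitstring; its output is that
    pattern repeated out to length n. (A hypothetical stand-in for a
    universal machine, chosen so everything stays computable.)"""
    return (program * n)[:n]

def predict_next(observed: str, max_len: int = 8) -> float:
    """Solomonoff-style prediction over the toy class: weight each
    program p by 2^(-len(p)), keep those whose output extends the
    observation, and compare the mass placed on next bit 1 vs 0."""
    mass = {"0": 0.0, "1": 0.0}
    n = len(observed) + 1
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            p = "".join(bits)
            out = run(p, n)
            if out.startswith(observed):
                mass[out[-1]] += 2.0 ** (-length)
    return mass["1"] / (mass["0"] + mass["1"])  # P(next bit is 1)

print(round(predict_next("0101"), 3))  # → 0.222
```

After observing "0101" the short program "01" dominates the mixture, so the predictor assigns roughly 0.78 probability to the next bit being 0, continuing the alternation. The Occam-style preference for short explanations arises purely from the exponential length penalty, which is the core of Solomonoff's idea.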
Related people: Claude Shannon, Alan Turing
Discussed in:
- Chapter 1: What Is AI?, A Brief History of AI