5.17 Closing: Statistics as the Substrate of AI

Every chapter that follows builds on the foundation laid here. When we discuss neural network optimisation in Chapter 9, we are choosing among numerical routes to the same MLE point estimate. When we discuss Bayesian deep learning in Chapter 14, we are approximating the posterior whose mode is that MAP estimate. When we discuss benchmarks and evaluations in Chapter 16, we are rediscovering the reporting standards that randomised clinical trials embraced decades ago. When we discuss safety and interpretability in Chapter 17, we are wrestling with the same identification problems that causal inference made formal.

If statistics is unfamiliar territory, the reward for fluency is profound: every algorithmic choice in modern AI becomes a recognisable instance of an old debate, and every new method becomes a tractable variation on a theme you have seen before. If statistics is familiar territory, AI becomes an inviting field full of variations on themes you already understand. The terminology will continue to evolve ("foundation models", "in-context learning", "prompt engineering"), but the substance will remain what it has always been: estimating, evaluating, and reasoning under uncertainty from finite data.
