References

Greedy function approximation: A gradient boosting machine.

Jerome H. Friedman (2001)

The Annals of Statistics, 29(5).

DOI: https://doi.org/10.1214/aos/1013203451

Abstract. Recasts boosting as gradient descent in function space: at each stage a new weak learner is fit to the negative gradient of the loss evaluated at the current model's predictions (the pseudo-residuals), and added to the ensemble. The framework generalises AdaBoost to arbitrary differentiable losses and underpins modern gradient-boosted trees.

Tags: ensembles boosting gradient-boosting
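The stagewise procedure the abstract describes can be sketched in a few lines. This is a minimal illustration, not Friedman's full algorithm: it assumes squared-error loss (so the negative gradient is simply the residual y − F(x)) and uses one-dimensional regression stumps as the weak learners; function and parameter names are invented for the example.

```python
def fit_stump(x, residuals):
    """Depth-1 weak learner: pick the split threshold that minimises
    squared error, predicting the mean residual on each side."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def gradient_boost(x, y, n_rounds=100, learning_rate=0.1):
    """Each round fits a weak learner to the negative gradient of the loss;
    for squared error this is the current residual y - F(x)."""
    f0 = sum(y) / len(y)          # initial constant model
    F = [f0] * len(x)             # current ensemble predictions
    learners = []
    for _ in range(n_rounds):
        residuals = [yi - Fi for yi, Fi in zip(y, F)]   # negative gradient
        stump = fit_stump(x, residuals)
        learners.append(stump)
        F = [Fi + learning_rate * stump(xi) for Fi, xi in zip(F, x)]
    return lambda xi: f0 + learning_rate * sum(h(xi) for h in learners)

# Usage: fit a step function; predictions approach 0 on the left, 1 on the right.
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 0, 0, 0, 1, 1, 1, 1]
model = gradient_boost(x, y)
```

Swapping in a different differentiable loss only changes the residual computation, which is the generalisation the paper contributes.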
