Abstract. Recasts boosting as gradient descent in function space, where each new weak learner is fit to the negative gradient of the loss. The framework generalises AdaBoost to arbitrary differentiable losses and underpins modern gradient-boosted trees.
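The core idea can be sketched in a few lines: under squared-error loss the negative gradient at each training point is simply the residual, so each boosting round fits a weak learner (here, a hypothetical one-split "stump") to the current residuals. This is a minimal illustrative sketch, not Friedman's full algorithm (no line search, trees of depth one only):

```python
def fit_stump(x, r):
    """Fit a one-split stump to residuals r: find the threshold
    minimising squared error, with a constant prediction on each side."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - (lm if xi <= t else rm)) ** 2
                  for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Gradient boosting with squared-error loss: each round fits a
    stump to the residuals (the negative gradient of the loss)."""
    f0 = sum(y) / len(y)  # initial model: the constant minimiser
    stumps = []
    for _ in range(n_rounds):
        pred = [f0 + lr * sum(s(xi) for s in stumps) for xi in x]
        resid = [yi - pi for yi, pi in zip(y, pred)]  # negative gradient
        stumps.append(fit_stump(x, resid))
    return lambda xi: f0 + lr * sum(s(xi) for s in stumps)

# Toy step-shaped data: the ensemble should recover both levels.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 3.1, 3.0, 3.2]
model = gradient_boost(x, y)
```

Swapping in a different differentiable loss only changes the `resid` line: the weak learner is always fit to the pointwise negative gradient, which is what generalises AdaBoost's reweighting scheme.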
Tags: ensembles, boosting, gradient-boosting