Geoffrey Hinton, Oriol Vinyals, & Jeff Dean (2015)
arXiv.
DOI: https://doi.org/10.48550/arXiv.1503.02531
Abstract. Introduces knowledge distillation, which trains a smaller 'student' model to match the softened output distribution of a larger 'teacher' model. Raising the softmax temperature exposes the rich information about relative class similarities encoded in the teacher's logits, enabling effective model compression.
Tags: efficiency distillation
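
The core mechanism is a temperature-scaled softmax: the teacher's logits z_i are turned into soft targets p_i = exp(z_i / T) / Σ_j exp(z_j / T), and the student is trained to match them. Below is a minimal PyTorch sketch of the combined objective, assuming the common blend of a distillation term and hard-label cross-entropy; the function name and the values of T and alpha are illustrative choices, while the T² rescaling of the soft-target term follows the paper's note on keeping gradient magnitudes comparable across temperatures.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of soft-target distillation loss and hard-label cross-entropy."""
    # Soften both distributions with temperature T: p_i = exp(z_i / T) / sum_j exp(z_j / T).
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # KL divergence between the softened distributions; the T**2 factor
    # compensates for the 1/T**2 scaling of gradients through the soft targets.
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)
    # Standard cross-entropy against the ground-truth labels (temperature 1).
    ce = F.cross_entropy(student_logits, labels)
    # alpha weights the two terms; 0.5 here is an illustrative default.
    return alpha * kd + (1.0 - alpha) * ce
```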