References

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe & Christian Szegedy (2015)

arXiv.

DOI: https://doi.org/10.48550/arXiv.1502.03167

Abstract. Introduces batch normalisation, which normalises layer inputs over each mini-batch to zero mean and unit variance. The technique dramatically accelerates the training of deep networks and was originally motivated by the goal of reducing 'internal covariate shift'.

Tags: regularisation batch-normalisation training
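The core transform described in the abstract can be sketched in a few lines of NumPy: normalise each feature over the mini-batch, then apply the learnable scale (gamma) and shift (beta) parameters the paper introduces to restore representational power. This is a minimal illustration of the training-time forward pass only; the epsilon value and batch shape are arbitrary choices, and inference-time running statistics are omitted.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Mini-batch normalisation of x with shape (batch, features)."""
    # Per-feature statistics computed over the mini-batch (axis 0)
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    # Normalise to approximately zero mean and unit variance;
    # eps guards against division by zero for low-variance features
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Learnable scale (gamma) and shift (beta) let the network
    # recover the identity transform if that is optimal
    return gamma * x_hat + beta

# Illustrative mini-batch: 64 examples, 10 features, far from normalised
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
```

With gamma = 1 and beta = 0, each feature of `y` has (near) zero mean and unit variance regardless of the input's scale, which is the property the paper credits with accelerating training.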
