References

Understanding the Difficulty of Training Deep Feedforward Neural Networks

Xavier Glorot & Yoshua Bengio (2010)

Proceedings of AISTATS 2010, 249-256.

URL: https://proceedings.mlr.press/v9/glorot10a.html

Abstract. Analyses how activation and gradient variance propagates through deep networks and proposes the Xavier (Glorot) initialisation scheme that preserves variance across layers, making training of deeper networks substantially easier.

Tags: neural-networks initialisation training url-only
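The variance-preserving scheme the abstract describes can be sketched in a few lines (a minimal NumPy illustration, not the authors' code; the function name and seed are mine). Glorot initialisation draws weights from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)), giving each weight variance 2 / (fan_in + fan_out) so that signal variance is roughly preserved in both the forward and backward passes.

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    """Glorot/Xavier uniform initialisation: sample a (fan_in, fan_out)
    weight matrix from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)).

    The resulting weight variance is a^2 / 3 = 2 / (fan_in + fan_out),
    the compromise the paper derives between preserving activation
    variance forward and gradient variance backward."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

# Hypothetical layer sizes for illustration.
W = xavier_uniform(512, 256, rng=np.random.default_rng(0))
```

For large matrices the empirical variance of `W` should sit close to 2 / (512 + 256).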
