References

Multilayer feedforward networks with a nonpolynomial activation function can approximate any function

Moshe Leshno, Vladimir Ya. Lin, Allan Pinkus, & Shimon Schocken (1993)

Neural Networks, 6(6), 861-867.

DOI: https://doi.org/10.1016/s0893-6080(05)80131-5

Summary. Strengthens the universal approximation theorem by showing that approximation of arbitrary continuous functions holds exactly when the activation function is nonpolynomial, removing the earlier boundedness assumption and thereby covering the later ReLU nonlinearity.

Tags: neural-networks theory universal-approximation
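The theorem summarized above can be illustrated numerically: a single hidden layer of ReLU units (an unbounded, nonpolynomial activation) can approximate a smooth target such as sin(x) to small uniform error on a compact interval. The sketch below is only a demonstration, not the paper's construction: it fixes random ReLU features with kinks spread over the interval and fits only the output weights by least squares; all names and parameter choices (grid size, 100 hidden units, the interval [-3, 3]) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function sampled on a grid over a compact interval
x = np.linspace(-3.0, 3.0, 500)
y = np.sin(x)

# Random ReLU hidden layer: relu(s * (x - t)), with the kink location t
# drawn uniformly over the interval so the units cover it evenly
n_hidden = 100
s = rng.choice([-1.0, 1.0], size=n_hidden)
t = rng.uniform(-3.0, 3.0, size=n_hidden)
features = np.maximum(0.0, s * (x[:, None] - t[None, :]))
features = np.hstack([features, np.ones((x.size, 1))])  # output-layer bias

# Train only the linear output layer by least squares
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
approx = features @ coef

max_err = np.max(np.abs(approx - y))
print(f"max |sin(x) - network(x)| on the grid: {max_err:.4f}")
```

Because 100 ReLU units define a piecewise-linear function with roughly 100 breakpoints on [-3, 3], the uniform error against a smooth target drops rapidly as more units are added, consistent with the theorem's claim that nonpolynomiality is the only requirement on the activation.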
