Abstract. Strengthens the universal approximation theorem by showing that any non-polynomial activation function suffices, removing the boundedness assumption and thereby encompassing unbounded activations such as the later ReLU nonlinearity.
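As a minimal numerical sketch of the theorem's claim (not the paper's construction), the snippet below fits a one-hidden-layer network with the unbounded, non-polynomial ReLU activation to a continuous target on a compact interval. The hidden weights are fixed at random and only the output layer is solved by least squares; the width, target function, and sampling choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_features(x, w, b):
    # x: (n, 1); w, b: (width,) -> (n, width) hidden-layer activations
    return np.maximum(0.0, x * w + b)

# Continuous target on a compact set: sin on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
target = np.sin(x).ravel()

# Random hidden layer (illustrative width), linear readout by least squares
width = 50
w = rng.normal(size=width)
b = rng.uniform(-np.pi, np.pi, size=width)

H = relu_features(x, w, b)
coef, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ coef

max_err = np.max(np.abs(approx - target))
print(f"max abs error with width={width}: {max_err:.4f}")
```

Increasing `width` drives the error down, consistent with the density result; with a polynomial activation the span of hidden units would itself be polynomial, which is exactly the case the theorem excludes.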
Tags: neural-networks, theory, universal-approximation