Nonparametrically Learning Activation Functions in Deep Neural Nets

Figure: An activation function learned on CIFAR-10.

Abstract

We provide a principled framework for nonparametrically learning activation functions in deep neural networks. Current state-of-the-art deep networks treat the choice of activation function as a hyperparameter fixed before training. By allowing activation functions to be estimated as part of the training procedure, we expand the class of functions that each node in the network can learn. Building on recent advances in stability bounds for stochastic gradient methods, we provide a theoretical justification for our choice of nonparametric activation functions and show that networks using them generalize well. We evaluate our techniques on image-recognition datasets and achieve up to a 15% relative increase in test performance compared to the baseline.
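To make the idea concrete, here is a minimal, hypothetical sketch of one way to train an activation function jointly with the network weights: each activation is a learnable linear combination of fixed Gaussian RBF basis functions. The basis choice, sizes, and ReLU-like initialization here are illustrative assumptions, not the report's actual estimator.

```python
# Hypothetical sketch: a learnable activation whose shape is estimated
# during training, alongside the ordinary layer weights.
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """f(x) = sum_k w_k * exp(-(x - c_k)^2 / (2 * s^2)), with learnable w_k."""

    def __init__(self, num_basis=16, span=3.0):
        super().__init__()
        # Fixed RBF centers spread over the expected pre-activation range.
        self.register_buffer("centers", torch.linspace(-span, span, num_basis))
        # Bandwidth roughly matching the center spacing.
        self.bandwidth = 2 * span / num_basis
        # Learnable mixture weights; initialized to a rough ReLU-like shape
        # (only approximate, since the Gaussians overlap and decay at the edges).
        self.weights = nn.Parameter(torch.relu(self.centers).clone())

    def forward(self, x):
        # Broadcast to (..., num_basis), evaluate the RBFs, then combine
        # them with the learned weights.
        diff = x.unsqueeze(-1) - self.centers
        phi = torch.exp(-0.5 * (diff / self.bandwidth) ** 2)
        return phi @ self.weights

# Usage: the activation's weights are ordinary parameters, so a single
# SGD optimizer trains them together with the linear layers.
net = nn.Sequential(
    nn.Linear(784, 256), LearnableActivation(),
    nn.Linear(256, 10),
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
```

Because the activation shape is just another set of parameters, existing stability analyses for stochastic gradient methods can, in principle, be extended to cover it, which is the connection the abstract's generalization argument relies on.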

Publication
Technical Report