CNEL Seminar: Firas Laakom

“Feature Diversity Regularization for Neural Networks”
Wednesday, March 1 at 3:00pm
NEB 409

Presented by the Computational NeuroEngineering Laboratory

Abstract

Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization, where the errors are back-propagated from the last layer to the first. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this line of work, we propose to complement this traditional ‘between-layer’ feedback with an additional ‘within-layer’ feedback that encourages diversity of the activations within the same layer. We present an extensive empirical study confirming that the proposed regularizer enhances the performance of several state-of-the-art neural network models on multiple tasks.
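The abstract does not specify the exact form of the within-layer diversity term, but one common way to penalize redundant features is to discourage correlated activations between units of the same layer. The sketch below is a hypothetical NumPy illustration of that idea (not the speaker's actual regularizer): it measures the mean squared off-diagonal cosine similarity between the units' activation patterns over a batch, which is zero for orthogonal (maximally diverse) features and grows as units become redundant.

```python
import numpy as np

def diversity_penalty(acts):
    """Illustrative within-layer redundancy penalty.

    acts: array of shape (batch, units) -- activations of one hidden layer.
    Returns the mean squared off-diagonal cosine similarity between units:
    0 when all units produce mutually orthogonal activation patterns,
    approaching 1 when units are copies of each other.
    """
    # L2-normalize each unit's activation pattern across the batch.
    norms = np.linalg.norm(acts, axis=0, keepdims=True) + 1e-12
    z = acts / norms
    # Gram matrix of cosine similarities between units.
    gram = z.T @ z
    d = gram.shape[1]
    off_diag = gram - np.eye(d)
    return np.sum(off_diag ** 2) / (d * (d - 1))
```

In training, such a term would typically be added to the task loss with a weighting coefficient, e.g. `loss = task_loss + lam * diversity_penalty(hidden_acts)`, so that the within-layer feedback acts alongside the usual back-propagated between-layer signal.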

Moreover, we complement this empirical study with a theoretical analysis. We investigate how learning non-redundant, distinct features affects the generalization of a neural network, and derive novel generalization bounds, based on Rademacher complexity, that depend on feature diversity. Our analysis proves that more distinct features at the hidden units of an intermediate layer lead to better generalization. We then show how to extend our empirical and theoretical analysis beyond supervised learning with neural networks, using the energy-based model framework.

Biography

Firas Laakom is a doctoral student at Tampere University, Finland. He received his engineering degree from Tunisia Polytechnic School (TPS) in 2018. He has co-authored 4 international journal papers and 9 papers in international conferences & workshops. His research interests include computer vision and statistical learning theory.