Abstract
Deep neural networks are typically initialized with random weights, with variances chosen to facilitate signal propagation and stable gradients. It is commonly believed that diversity of features is an important property of these initializations. We construct a deep convolutional network with identical features by initializing almost all the weights to 0. The architecture also enables perfect signal propagation and stable gradients, and achieves high accuracy on standard benchmarks. This indicates that random, diverse initializations are not necessary for training neural networks. An essential element in training this network is a mechanism of symmetry breaking; we study this phenomenon and find that the non-determinism of standard GPU operations can serve as a sufficient source of symmetry breaking to enable training.
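To see why symmetry breaking is essential, consider the following toy sketch (a hypothetical illustration, not the paper's actual architecture): in exact arithmetic, two hidden units initialized with identical weights receive identical gradients at every step, so plain gradient descent can never differentiate them.

```python
import numpy as np

# Toy one-hidden-layer network with two identically initialized hidden
# units. With deterministic CPU arithmetic there is no symmetry breaking,
# so the units stay exactly equal throughout training.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))   # batch of inputs
y = rng.normal(size=(8, 1))   # regression targets

W1 = np.full((3, 2), 0.5)     # both hidden units start identical
W2 = np.full((2, 1), 0.5)

for _ in range(100):          # plain gradient descent on squared error
    h = np.tanh(x @ W1)       # hidden activations (columns identical)
    err = h @ W2 - y
    gW2 = h.T @ err / len(x)
    gh = err @ W2.T * (1 - h**2)
    gW1 = x.T @ gh / len(x)
    W1 -= 0.1 * gW1
    W2 -= 0.1 * gW2

# The two hidden units' incoming weight vectors are still equal:
print(np.allclose(W1[:, 0], W1[:, 1]))  # True
```

On a GPU, by contrast, floating-point non-determinism (e.g. in reduction order) makes the two gradient updates differ by tiny amounts, and those differences can grow under training — the source of symmetry breaking the abstract refers to.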
This is joint work with Yaniv Blumenfeld and Dar Gilboa.
ICML paper: https://arxiv.org/abs/2007.01038