Abstract
We present a provably efficient, layer-by-layer algorithm for training deep sum-product neural networks. Our algorithm comes with formal polynomial-time convergence guarantees. As with other deep architectures, the resulting network can compactly represent any function on a finite training set. We also discuss provably efficient algorithms for training other types of neural networks.