Limitations of Shallow Neural Networks

Date: 10.03.2017
Speaker: Věra Kůrková
Responsible person: Kotera

The recent success of deep networks poses a theoretical question: When are deep networks provably better than shallow ones? Using probabilistic and geometric properties of high-dimensional spaces, we show that for the most common types of computational units, almost any uniformly randomly chosen function on a sufficiently large domain cannot be computed by a reasonably sparse shallow network. We also discuss connections with the No Free Lunch Theorem, with the central paradox of coding theory, and with pseudo-noise sequences.
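The core claim, that a uniformly random function on a large discrete domain is almost surely not well computable by a sparse shallow network, can be illustrated numerically. The following minimal Python sketch is not part of the talk; all parameter choices (the dimension d, the unit counts, the random weight distribution) are illustrative assumptions. It fits the output weights of a shallow network of signum perceptrons to a uniformly random sign-valued target on {-1,1}^d and reports the relative error of the best least-squares fit. As a simplification, it optimizes only the output weights over randomly drawn hidden units, whereas the theoretical results concern all networks with a given number of units.

```python
# Illustrative sketch (assumption-laden, not the talk's method): fit a
# uniformly random +/-1 target on the full Boolean cube {-1,1}^d with a
# shallow network of k random signum perceptrons, optimizing only the
# output weights by least squares.
import itertools
import numpy as np

rng = np.random.default_rng(0)

d = 10                                    # input dimension; the domain has 2**d points
X = np.array(list(itertools.product((-1, 1), repeat=d)), dtype=float)
f = rng.choice((-1.0, 1.0), size=len(X))  # uniformly random target function

for k in (8, 64, 512):                    # number of hidden units (all far below 2**d)
    W = rng.standard_normal((d, k))       # random inner weights
    b = rng.standard_normal(k)            # random biases
    H = np.sign(X @ W + b)                # hidden-layer outputs, shape (2**d, k)
    c, *_ = np.linalg.lstsq(H, f, rcond=None)  # optimal output weights
    err = np.linalg.norm(f - H @ c) / np.linalg.norm(f)
    print(f"k = {k:4d}   relative L2 error = {err:.3f}")
```

Informally, the k hidden units span (at most) a k-dimensional subspace of the 2^d-dimensional space of functions on the cube, and a uniformly random target has almost no mass in any fixed low-dimensional subspace, so the relative error stays near sqrt(1 - k/2^d) until the number of units becomes comparable to the domain size.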