Deep Learning without Poor Local Minima
Kawaguchi, Kenji

In this paper, we prove a conjecture published in 1989 and partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For the expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of the saddle points differs between shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths, and no unrealistic assumptions. As a result, we present an instance for which we can answer the following question: how difficult is it, in theory, to directly train a deep model? It is more difficult than training classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.
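As a hypothetical illustration (not taken from the paper), the smallest deep linear case already exhibits statements 2 and 3: for the scalar two-layer linear model f(x) = w2 * w1 * x fit to the single pair (x, y) = (1, 1), the loss L(w1, w2) = (w1 * w2 - 1)^2 attains its global minimum of zero on the hyperbola w1 * w2 = 1, and the only other critical point, the origin, is a saddle. The NumPy sketch below (all names are illustrative, not from the paper) checks this numerically:

import numpy as np

# Toy two-layer linear network: L(w1, w2) = (w2 * w1 - 1)^2,
# i.e. f(x) = w2 * w1 * x fit to the single pair (x, y) = (1, 1).

def loss(w1, w2):
    return (w2 * w1 - 1.0) ** 2

def grad(w1, w2):
    r = w2 * w1 - 1.0
    return np.array([2.0 * r * w2, 2.0 * r * w1])

def hessian(w1, w2):
    return np.array([[2.0 * w2 ** 2,        4.0 * w1 * w2 - 2.0],
                     [4.0 * w1 * w2 - 2.0,  2.0 * w1 ** 2      ]])

# The origin is a critical point but not a minimum: its Hessian has
# eigenvalues of both signs, so it is a saddle point (statement 3).
print(grad(0.0, 0.0))                         # [0. 0.]
print(np.linalg.eigvalsh(hessian(0.0, 0.0)))  # [-2.  2.]

# Any point on w1 * w2 = 1 attains the global minimum L = 0 (statement 2).
print(loss(2.0, 0.5))                         # 0.0

This toy case only mirrors the linear setting; the paper's contribution is to show that this structure, with no poor local minima, persists for deep linear networks of any depth and any widths, and, under the independence assumption, for deep nonlinear networks as well.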
From the Computer Science and Artificial Intelligence Lab (CSAIL), Thursday, 26 May 2016: http://ift.tt/1Z2DjrM