A Top-Down Approach Toward Understanding Deep Learning

When and Where

Thursday, October 21, 2021, 3:30 pm to 4:30 pm
Online

Speakers

Weijie Su, University of Pennsylvania

Description

The remarkable development of deep learning over the past decade has relied heavily on sophisticated heuristics and tricks. To better exploit its potential in the coming decade, a rigorous framework for reasoning about deep learning may be needed, yet such a framework is not easy to build given the intricate details of modern neural networks. For near-term purposes, a practical alternative is to develop a mathematically tractable surrogate model that nevertheless retains many characteristics of deep learning models.
This talk introduces a model of this kind as a tool for understanding deep learning. The effectiveness of this model, which we term the Layer-Peeled Model, is evidenced by two use cases. First, we use it to explain an empirical pattern of deep learning recently discovered by David Donoho and his students. Second, the model predicts a hitherto unknown phenomenon in deep learning training, which we term Minority Collapse. This talk is based on joint work with Cong Fang, Hangfeng He, and Qi Long.
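For context, a minimal sketch of what a layer-peeled surrogate of this kind typically looks like is given below; the notation, the loss, and the exact normalization of the constraints are illustrative assumptions rather than details taken from this announcement. The idea is to "peel off" the network up to its last layer and optimize directly over the last-layer features and the classifier, subject to norm constraints:

\[
\min_{W,\,H}\;\; \frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_k} \mathcal{L}\!\left(W h_{k,i},\, y_k\right)
\quad \text{subject to} \quad
\frac{1}{K}\sum_{k=1}^{K}\lVert w_k\rVert_2^2 \le E_W,
\qquad
\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_k}\sum_{i=1}^{n_k}\lVert h_{k,i}\rVert_2^2 \le E_H,
\]

where \(h_{k,i}\) denotes the last-layer feature of the \(i\)-th example in class \(k\), \(w_k\) is the classifier weight vector for class \(k\), \(\mathcal{L}\) is the cross-entropy loss, \(N = \sum_k n_k\), and \(E_W, E_H\) are constants standing in for the capacity of the earlier layers.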

Please join the event.

About Weijie Su

Weijie Su is an Assistant Professor in the Department of Statistics and Data Science at The Wharton School and the Department of Computer and Information Science at the University of Pennsylvania. He is a co-director of Penn Research in Machine Learning. Prior to joining Penn, he received his Ph.D. in statistics from Stanford University in 2016 and his bachelor's degree in mathematics from Peking University in 2011. His research interests span privacy-preserving data analysis, optimization, high-dimensional statistics, and deep learning theory. He is a recipient of the Stanford Theodore Anderson Dissertation Award in 2016, an NSF CAREER Award in 2019, and a Sloan Research Fellowship in 2020.