Rethinking the Role of Optimization in Learning

When and Where

Thursday, November 12, 2020 3:30 pm to 4:30 pm
Online
Zoom, Passcode: 666317

Speakers

Suriya Gunasekar, Senior Researcher, Microsoft Research at Redmond

Description

In this talk, I will give an overview of recent results toward understanding how we learn large-capacity machine learning models. In the modern practice of machine learning, especially deep learning, many successful models have far more trainable parameters than training examples, leading to ill-posed optimization objectives. In practice, though, when such ill-posed objectives are minimized using local search algorithms like (stochastic) gradient descent ((S)GD), the "special" minimizers returned by these algorithms have remarkably good performance on new examples. In this talk, we will explore the role of optimization algorithms like (S)GD in learning overparameterized models, focusing on the simpler setting of learning linear predictors.
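The phenomenon the abstract describes can be seen in a minimal sketch (not from the talk; dimensions, step size, and iteration count are illustrative assumptions): with more parameters than examples, infinitely many linear predictors fit the training data exactly, yet gradient descent started from zero picks out one "special" minimizer, the minimum-ℓ2-norm interpolating solution.

```python
import numpy as np

# Overparameterized linear regression: more parameters (d) than examples (n),
# so infinitely many weight vectors interpolate the training data.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent on the squared loss, initialized at zero.
# Step size is set below 1 / lambda_max(X^T X) so the iterates converge.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

# The minimum-l2-norm solution among all interpolators (via the pseudoinverse).
w_min_norm = np.linalg.pinv(X) @ y

print(np.allclose(X @ w, y, atol=1e-5))       # GD fits the training data exactly
print(np.allclose(w, w_min_norm, atol=1e-5))  # and lands on the min-norm solution
```

Intuitively, starting from zero keeps the iterates in the row space of X, so among all interpolating solutions GD converges to the one of smallest Euclidean norm; this "implicit bias" of the algorithm, rather than the objective alone, is what the talk examines.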

Please join the event.

About Suriya Gunasekar

Suriya Gunasekar is a Senior Researcher in the Machine Learning Foundations group at Microsoft Research at Redmond. Prior to joining MSR, she was a Research Assistant Professor at Toyota Technological Institute at Chicago. She received her PhD in Electrical and Computer Engineering from The University of Texas at Austin.