# Blair Bilodeau

Current Job: Graduate Researcher, Vector Institute

Undergraduate Degree: Financial Modelling (major), Math (minor), Western University

BSc Cohort: 2018

PhD (expected): 2023 in Statistics

Homepage

⤙⟡⤚

### After completing his undergraduate degree in financial math at Western University, Blair’s passion for theoretical machine learning drew him to U of T’s PhD program in the Department of Statistical Sciences. Currently a graduate researcher at the Vector Institute, Blair continues to explore new algorithms for sequential prediction, as well as his interest in the relationship between machine learning and statistics.

U of T Statistical Sciences: What are you currently working on? Tell us a little bit about some of your current projects and the work that you do.

My interests are broadly in theoretical machine learning. When we’re interested in making decisions based on predictions, we want to go back to what statistics is all about, which is making inferences and understanding the implications of the randomness in our data. To do that, we want some sort of theoretical guarantees about how these algorithms work, when they’re going to work, and, maybe even more importantly, when they’re not going to work.

The specific area that I’m working on right now is something called sequential prediction. It’s a little different from a traditional statistical framework, where you would have a bunch of data, analyze it, and then build a model from it. In sequential prediction, you receive data in an ongoing manner, and you update your beliefs about the world with each new input you receive.
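To make the idea concrete, here is a minimal sketch of one classic sequential-prediction algorithm, the exponentially weighted average forecaster, which updates its beliefs (expert weights) after each new observation. This is an illustrative textbook example, not Blair’s own method; all names and parameters here are assumptions.

```python
import math

def exponential_weights(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster (illustrative sketch).

    expert_preds: list of rounds, each a list of expert predictions in [0, 1]
    outcomes: list of observed outcomes in [0, 1], revealed one round at a time
    eta: learning rate controlling how aggressively weights are updated
    """
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    predictions = []
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Predict with the weighted average of the experts' advice.
        predictions.append(sum(w * p for w, p in zip(weights, preds)) / total)
        # After the outcome is revealed, down-weight experts that suffered
        # large squared loss this round.
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return predictions
```

Each round the forecaster commits to a prediction before seeing the outcome, then shifts weight toward the experts that predicted well, so its predictions gradually track the best expert.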

What specific areas of work are you interested in?

What we’re interested in is seeing how we can control regret bounds for loss functions that don’t satisfy commonly assumed properties. The big one is something called the Lipschitz property, which means that the function can’t be too steep at any point where you’re making predictions. So, that’s what I’m currently working on. The way that we want to extend and build on some work that people have done is by getting regret bounds that adapt to whatever data you observe.
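For readers unfamiliar with the terms, regret is usually defined as the gap between the algorithm’s cumulative loss and that of the best fixed predictor in hindsight; a standard formulation (the notation is assumed here, not taken from the interview) is

$$
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell(\hat{y}_t, y_t) \;-\; \min_{f \in \mathcal{F}} \sum_{t=1}^{T} \ell(f(x_t), y_t),
$$

and a loss is $L$-Lipschitz in its prediction argument when $|\ell(y, z) - \ell(y', z)| \le L\,|y - y'|$, i.e., it can’t be too steep. Many classical regret bounds assume this; the work described above asks what can be guaranteed without it.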

Lots of people do interesting work writing new algorithms that perform well on their experimental data sets; they arrive at some intuition and get really good empirical results. Sometimes they have some theory, and that’s what motivates them to write the algorithm that way. But very often, they don’t have guarantees on how well the algorithm will perform on a given type of data before they run it on the data set. That’s what I’m working on.

U of T appealed to me as a PhD choice because I’m interested in theory, and they offer courses that really dig deep into theory.

What was your undergrad like? Did you go straight from a BA to a PhD or did you do a master’s in between?

I did a BSc at Western University in financial math. My undergraduate coursework was in both applied math and statistics, and I also had a minor in pure math. After that, I looked directly into PhD programs. I didn’t do a master’s mainly because I was also applying to U.S. schools, where you don’t usually do a master’s before a PhD. I just wanted to apply to the same level of programs across the board.

U of T appealed to me as a PhD choice because I’m interested in theory, and they offer courses that really dig deep into theory. A lot of students aren’t that interested in very theoretical statistics, so many programs make it easy to sidestep that as you’re going through. I liked that U of T actually encouraged theory. With its theoretical statistics course and extremely theoretical probability course, alongside comprehensive exams on theory, the program seemed really well suited to what I was interested in.

And the other thing is that they have a direct-entry PhD program, which a lot of schools in Canada don’t have.

If you want to [...] make decisions that are actually impacting [...] humans, you want to have a good understanding of 'is this algorithm actually working well or not?' My research is a step towards understanding that.

What makes statistics essential to machine learning and AI (artificial intelligence)?

It is fundamental. Machine learning is, by definition, predicting things about the world, which are random or have randomness in them, and randomness is the field of statistics. Machine learning likes to view things in terms of prediction, and statistics likes to view things in terms of inference, understanding things about your data. They are basically two sides of the same coin. Leaving statistics out when you’re making predictions is what leads to people making predictions with no understanding of how good those predictions are.

What would you say is the greater impact of your research in the field of statistics?

I guess the impact is making it possible for people to understand a little better how these algorithms work, and maybe get a more fundamental idea of what types of data algorithms work well on and, more importantly, what types they don’t work well on.

If you want to apply an algorithm to something or you want to make decisions that are actually impacting things that are going to happen to humans, you want to have a good understanding of “is this algorithm actually working well or not?” My research is a step towards understanding that.

Interested in pursuing a PhD in Statistics?