MIT Tech Review: A radical new neural network design could overcome challenges in AI

December 13, 2018 by Dee Keilholz

David Duvenaud, an assistant professor in the Departments of Statistical Sciences and Computer Science, and his team borrowed equations from calculus to redesign the core machinery of deep learning.

Published in MIT Technology Review | by Karen Hao, December 12, 2018

Researchers borrowed equations from calculus to redesign the core machinery of deep learning so it can model continuous processes like changes in health.

David Duvenaud was working on a project involving medical data when he hit upon a major shortcoming in AI.

An AI researcher at the University of Toronto, he wanted to build a deep-learning model that would predict a patient’s health over time. But data from medical records is kind of messy: throughout your life, you might visit the doctor at different times for different reasons, generating a smattering of measurements at arbitrary intervals. A traditional neural network struggles to handle this. Its design requires it to learn from data with clear stages of observation. Thus it is a poor tool for modeling continuous processes, especially ones that are measured irregularly over time.
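The contrast between discrete observation stages and a continuous process can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the team's actual model (which pairs a learned network with an adaptive ODE solver): the dynamics function `f`, the Euler step size, and the visit times below are all stand-ins.

```python
import numpy as np

def f(h, t):
    # Stand-in "dynamics" function. In a real continuous-depth model,
    # f would be a trained neural network; here it is a fixed toy function.
    return 0.5 * np.tanh(h)

def discrete_resnet(h0, n_layers):
    # Traditional residual network: a fixed number of discrete update steps,
    # implicitly assuming one observation per step at regular intervals.
    h = h0
    for _ in range(n_layers):
        h = h + f(h, None)
    return h

def ode_state_at(h0, t, dt=0.01):
    # Continuous view: treat the hidden state as obeying dh/dt = f(h, t)
    # and integrate it (here with Euler's method), so the state can be
    # read out at ANY time t -- including irregular intervals.
    h, time = h0, 0.0
    while time < t:
        step = min(dt, t - time)
        h = h + step * f(h, time)
        time += step
    return h

# Irregularly spaced "doctor visits": the continuous model evaluates the
# state directly at each visit time, while the discrete network would need
# the data binned into fixed-size stages.
visit_times = [0.3, 1.1, 2.7]
states = [ode_state_at(np.array([1.0]), t) for t in visit_times]
```

The key design point is that the number of integration steps is an internal detail of the solver, decoupled from when the measurements happen to arrive.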

Read the full story on MIT Technology Review.