MIT-Harvard Communications Information Networks Circuits and Signals (CINCS) / Hamilton Institute Seminar

Title: When Does Deep Learning Succeed (and Fail)? Robustness, Interpretability and Fairness in Deep Learning

Speaker: Soheil Feizi

Abstract: In the last couple of years, much progress has been made in understanding various fundamental aspects of deep models. A key question is how to measure success in deep learning. A classical answer is to evaluate the performance of trained models on the test set. However, it has been shown that this measure, although important, does not tell the whole story: models with impressive test-set accuracy can be extremely fragile against natural or adversarial noise, can suffer catastrophically from poor interpretability, or can produce biased and unfair outcomes. In this talk, I will explain some success and failure stories of deep models by characterizing their intertwined aspects of robustness, interpretability and fairness. I will then present solutions to provably mitigate these multifaceted issues in deep models.

Bio: Soheil Feizi is an assistant professor in the Computer Science Department at the University of Maryland, College Park. Before joining UMD, he was a post-doctoral research scholar at Stanford University. He received his Ph.D. from the Massachusetts Institute of Technology (MIT). He received the NSF CAREER award in 2020 and the Simons-Berkeley Research Fellowship on deep learning foundations in 2019. He is the 2020 recipient of the AWS Machine Learning Research award, and the 2019 recipient of the IBM faculty award as well as the Qualcomm faculty award. He received teaching awards in Fall 2018 and Spring 2019 in the CS department at UMD. His work received the best paper award of IEEE Transactions on Network Science and Engineering over the three-year period 2017-2019. He received the Ernst Guillemin award for his M.Sc. thesis, as well as the Jacobs Presidential Fellowship and the EECS Great Educators Fellowship at MIT.