Link for Live Seminar
Link for Recorded Seminars – 2020/2021 school year
Neural networks have become increasingly effective at many difficult machine-learning tasks. However, their nonlinear and large-scale nature makes them hard to analyze, so deep neural networks (DNNs) are mostly used as black-box models without formal guarantees. This issue becomes even more critical when DNNs are used in learning-enabled closed-loop systems, where a small perturbation can substantially impact the system being controlled. It is therefore of utmost importance to develop tools that can provide useful certificates of stability, safety, and robustness for DNN-driven systems.
In this talk, we present a convex optimization framework that can address several problems regarding deep neural networks. The main idea is to abstract hard-to-analyze components of a DNN (e.g., the nonlinear activation functions) with the formalism of quadratic constraints. This abstraction allows us to reason about various properties of DNNs (safety, robustness, stability in closed-loop settings, etc.) via semidefinite programming.
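To give a flavor of the quadratic-constraint abstraction, consider the ReLU activation; the following is a minimal sketch of the idea (the notation here is illustrative and may differ from the talk's exact formulation):

```latex
% The ReLU output y = max(x, 0) satisfies three exact relations:
\[
  y = \max(x, 0)
  \quad\Longrightarrow\quad
  y \ge 0, \qquad y \ge x, \qquad y\,(y - x) = 0 .
\]
% Each relation is at most quadratic in (x, y), so all of them
% can be encoded in the common form
\[
  \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}^{\top}
  Q
  \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \;\ge\; 0
\]
% for suitably structured symmetric matrices Q.
```

Replacing the exact (nonconvex) activation map with this family of quadratic constraints yields a convex relaxation, and searching over the multipliers that combine such constraints into a safety or robustness certificate becomes a semidefinite program.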
Mahyar Fazlyab will join the Department of Electrical and Computer Engineering as an assistant professor in July 2021. Currently, he is an assistant research professor at the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University (JHU). Before that, Mahyar received his Ph.D. in Electrical and Systems Engineering (ESE) from the University of Pennsylvania (UPenn) in 2018, along with a Master's degree in Statistics from the Wharton School. He was also a postdoctoral fellow in the ESE Department at UPenn from 2018 to 2020. Mahyar's research interests are at the intersection of optimization, control, and machine learning. His current research focus is on the safety and stability of learning-enabled autonomous systems. Mahyar won the Joseph and Rosaline Wolf Best Doctoral Dissertation Award in 2019, awarded by the Department of Electrical and Systems Engineering at the University of Pennsylvania.