Towards Trustworthy AI: Distributionally Robust Learning under Data Shift
Anqi Liu, PhD
Johns Hopkins University
Abstract: The unprecedented prediction accuracy of modern machine learning beckons for its use in a wide range of real-world applications, including autonomous robots, fine-grained computer vision, scientific experimental design, and many others. In order to create trustworthy AI systems, we must safeguard machine learning methods from catastrophic failures and provide calibrated uncertainty estimates. For example, we must account for uncertainty and guarantee the performance of safety-critical systems, like autonomous driving and health care, before deploying them in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains, especially safety-critical ones, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples. In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty estimation and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also introduce a survey of other real-world applications that would benefit from this framework as future work.
Biography: Dr. Anqi (Angie) Liu is an Assistant Professor in the Department of Computer Science at Johns Hopkins University.
She is broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. Her research focuses on enabling machine learning algorithms to be robust to changing data and environments, to provide accurate and honest uncertainty estimates, and to account for human preferences and values in interaction. She is particularly interested in high-stakes applications that concern the safety and societal impact of AI. Previously, she completed her postdoc in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science at the University of Illinois at Chicago. She was selected as a 2020 EECS Rising Star. Her publications appear in machine learning conferences such as NeurIPS, ICML, ICLR, AAAI, and AISTATS.