School of Electrical, Computer, and Energy Engineering
Arizona State University
“Alpha-loss:A Tunable Class of Loss Functions for Robust Learning”
Abstract: In this talk, we introduce alpha-loss as a parameterized class of loss functions that results from operationally motivating information-theoretic measures. Tuning the parameter alpha from 0 to infinity yields a class of loss functions that admits continuous interpolation between log-loss (alpha = 1), exponential loss (alpha = 1/2), and 0-1 loss (alpha = infinity). We discuss how different regimes of the parameter alpha enable the practitioner to tune the sensitivity of their algorithm toward two emerging challenges in learning: robustness and fairness. We discuss the classification properties of the class, its information-theoretic interpretations, and the optimization landscape of the average loss as viewed through the lens of Strict-Local-Quasi-Convexity under the logistic regression model. Finally, we comment on ongoing and future work on different applications of alpha-loss, including deep neural networks, federated learning, and boosting.
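To make the interpolation concrete, the following sketch uses one common parameterization of alpha-loss on a predicted probability p, namely (alpha/(alpha-1)) * (1 - p^(1 - 1/alpha)) for alpha != 1 and -log(p) at alpha = 1; the exact form used in the talk may differ, so treat this as an illustrative assumption rather than the speaker's definition:

```python
import math

def alpha_loss(p, alpha):
    """Alpha-loss of a predicted probability p in (0, 1].

    Assumed parameterization (illustrative, not necessarily the talk's):
      alpha = 1        -> -log(p)            (log-loss)
      alpha = 1/2      -> 1/p - 1            (exponential-loss form)
      alpha -> infinity -> 1 - p             (soft surrogate for 0-1 loss)
    """
    if alpha == 1.0:
        return -math.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

# Sweeping alpha shows the continuous interpolation between regimes:
p = 0.8
for a in (0.5, 1.0, 10.0, 1e6):
    print(a, alpha_loss(p, a))
```

At alpha = 1/2 the expression reduces to 1/p - 1 (the exponential loss under a logistic model), and as alpha grows it approaches 1 - p, a smooth proxy for the 0-1 loss.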
Bio: Lalitha Sankar is an Associate Professor in the School of Electrical, Computer, and Energy Engineering at Arizona State University. She received her doctorate from Rutgers University, her master's degree from the University of Maryland, and her bachelor's degree from the Indian Institute of Technology, Bombay. Her research is at the intersection of information theory and learning theory, as well as their applications to the electric grid. Her work has predominantly focused on identifying meaningful metrics for information privacy and algorithmic fairness; today's talk is a result of her broader privacy work. She received the NSF CAREER award in 2014 and currently leads an NSF- and Google-funded effort on using learning techniques to assess COVID-19 exposure risk in a secure and privacy-preserving manner.