Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218
Adversarial machine learning research has been nearly obsessed with test-time attacks on image (and to a lesser degree, text) classification tasks. This talk examines two directions that broaden the anticipated threats. First, I discuss vulnerabilities in sequential machine learning such as multi-armed bandits and reinforcement learning. We will see how the attacker can manipulate the environment and force the learner into learning any target policy. The attacker can optimize such attacks by solving a control problem, or equivalently, a higher-level reinforcement learning problem. Second, I revisit attacks on image classification and disprove a key assumption: that small pixel p-norm manipulation implies a humanly imperceptible attack. I will describe a human behavioral study demonstrating that pixel p-norm, earth mover's distance, structural similarity index, and deep net embedding do not match human perception.
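The environment-manipulation idea for bandits can be illustrated with a minimal sketch (not the speaker's actual construction): an attacker perturbs the rewards observed by an epsilon-greedy learner so that every non-target arm looks worse than the target arm, steering the learner toward the attacker's chosen arm. All names, the perturbation size, and the learner are illustrative assumptions.

```python
import random

def run_bandit(true_means, target_arm, rounds=2000, eps=0.1,
               attack=True, seed=0):
    """Epsilon-greedy bandit learner, optionally under a reward-poisoning
    attack. The attacker subtracts a constant from rewards of non-target
    arms so the target arm appears best (illustrative sketch only).
    Returns the fraction of rounds in which the target arm was pulled."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n            # pulls per arm
    est = [0.0] * n             # running mean reward estimates
    target_pulls = 0
    for _ in range(rounds):
        # epsilon-greedy arm choice
        if rng.random() < eps:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: est[a])
        reward = true_means[arm] + rng.gauss(0, 0.1)
        if attack and arm != target_arm:
            reward -= 1.0       # attacker degrades non-target arms
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]
        if arm == target_arm:
            target_pulls += 1
    return target_pulls / rounds
```

With arm means [0.2, 0.9] and target arm 0, the unattacked learner mostly pulls arm 1, while the attacked learner is driven to arm 0; an optimal attacker would instead minimize the total perturbation, which is the control-problem view mentioned in the abstract.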
Jerry Zhu is the Sheldon & Marianne Lubar professor in the Department of Computer Sciences at the University of Wisconsin-Madison. Jerry received his Ph.D. from Carnegie Mellon University in 2005. His research interest is in machine learning, particularly machine teaching, adversarial learning, active learning, and semi-supervised learning. He serves or has served in the following roles: conference chair for AISTATS and CogSci, Action Editor of the Machine Learning Journal, and member of the DARPA ISAT advisory group. He is a recipient of an NSF CAREER Award and a winner of multiple best paper awards, including an ICML classic paper prize in 2013.