September 7, 2021

12:00 pm – 1:00 pm

Venue

https://wse.zoom.us/j/95092468248?pwd=aGhBalo4aGFybDJaMFpZVFg4NXdIQT09

Recorded Seminar link: https://wse.zoom.us/rec/play/1SGmTL3HT_LVw39oRcD5yfmagEEOiQ4ErfkEGRo7aYGC90NSGwbmuLMCJSf2MW04GAUCG2-qYgSicCE1.wn6Ebh0wNiKqdv-C?startTime=1631030355000&_x_zm_rtaid=PP4fIybxTayNnZ6b19hpsA.1632842736978.4df5c32815f94f45bb3f09ff8fad339d&_x_zm_rhtaid=584

Praneeth Netrapalli

Title: "Pitfalls of Deep Learning?"

Abstract: While deep neural networks have achieved large gains in performance on benchmark datasets, their performance often degrades drastically with changes in data distribution encountered during real-world deployment. In this work, through systematic experiments and theoretical analysis, we attempt to understand the key reasons behind such brittleness of neural networks in real-world settings and why fixing these issues is exciting but challenging.

We first hypothesize, and through empirical and theoretical studies demonstrate, that (i) neural network training exhibits "simplicity bias" (SB), where the models learn only the simplest discriminative features, and (ii) SB is one of the key reasons behind the non-robustness of neural networks. A natural way to fix SB in trained models is by identifying the discriminative features used by the model and learning new features "orthogonal" to the learned feature, as sketched below.
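As a rough illustration of learning a feature "orthogonal" to one a model already relies on, the hedged sketch below trains a second linear probe while penalizing its alignment with the first probe's weight vector. The data, dimensions, and penalty weight (`lam`) are hypothetical placeholders, not the speaker's actual method.

```python
import torch
import torch.nn.functional as F

d = 20                           # toy input dimension (hypothetical)
X = torch.randn(512, d)          # placeholder data
y = (X[:, 0] > 0).float()        # labels driven by a single "simple" coordinate

def train_probe(X, y, w_prev=None, lam=1.0, steps=500):
    # Small random init so the probe has a well-defined direction from the start.
    w = (0.01 * torch.randn(d)).requires_grad_()
    opt = torch.optim.SGD([w], lr=0.1)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(X @ w, y)
        if w_prev is not None:
            # Penalize alignment with the previously learned feature direction.
            cos = F.cosine_similarity(w.unsqueeze(0), w_prev.unsqueeze(0))
            loss = loss + lam * cos.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

w1 = train_probe(X, y)             # first probe tends to latch onto the simple feature
w2 = train_probe(X, y, w_prev=w1)  # second probe is pushed toward an orthogonal direction
```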

Post-hoc gradient-based attribution methods are regularly used to identify the key discriminative features for a model. However, due to the lack of ground truth, a thorough evaluation of even the most basic input gradient attribution method is still missing from the literature. Our second contribution is to overcome this challenge through experiments and theory on real and designed datasets. Our results demonstrate that (i) input gradient attribution does NOT highlight correct features on standard models (i.e., trained on original data) but, surprisingly, it does highlight correct features on adversarially trained models (i.e., trained using adversarial training), and (ii) the reason input gradient attribution fails on standard models is "feature leakage": given an instance, its input gradients highlight the locations of discriminative features not only in that instance but also in other instances present in the dataset.
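For concreteness, the following minimal sketch shows plain input gradient attribution, the basic method the abstract refers to: differentiate the predicted-class logit with respect to the input. The model (`resnet18` with random weights) and random input are stand-ins; in practice any differentiable classifier and a real image would be used.

```python
import torch
import torchvision.models as models

# Any differentiable classifier would do; resnet18 with random weights is just a stand-in.
model = models.resnet18(weights=None).eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder for a real image

logits = model(x)
target = logits.argmax(dim=1).item()
logits[0, target].backward()        # d(predicted-class logit) / d(input pixels)

saliency = x.grad.abs().sum(dim=1)  # aggregate over channels -> (1, 224, 224) attribution map
```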

Our work raises more questions than it answers, so we will end with interesting directions for future work.

Bio: Praneeth Netrapalli is a research scientist at Google Research India, Bengaluru. He is also an adjunct professor at TIFR, Mumbai, and a faculty associate of ICTS, Bengaluru. Prior to this, he was a researcher at Microsoft Research. He obtained his MS and PhD in ECE from UT Austin, and his B.Tech in EE from IIT Bombay. He is a co-recipient of the IEEE Signal Processing Society Best Paper Award 2019 and is an associate of the Indian Academy of Sciences (IASc). His research interests are broadly in stochastic and nonconvex optimization, minimax/game-theoretic optimization, and designing reliable and robust machine learning algorithms.