Bren Professor at Caltech
Director of ML Research at NVIDIA
“Bridging the Gap Between Artificial and Human Intelligence: Role of Feedback”
Abstract: Deep learning has yielded impressive performance over the last few years. However, it is no match for human perception and reasoning. Recurrent feedback in the human brain has been shown to be critical for robust perception. Feedback is able to correct the potential errors made by the feed-forward pathways using an internal generative model of the world. The Bayesian brain hypothesis states that the human brain is carrying out Bayesian inference to obtain a consistent prediction. Deriving inspiration from this, we augment any existing neural network with feedback (NN-F). The feedback connections form a deconvolutional generative model that is Bayes-consistent with the given feed-forward neural network. We demonstrate inherent robustness in NN-F without any access to noisy examples, and further enhanced robustness when noisy examples are available.
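Purely as an illustration of the feedback idea described above (the NN-F architecture in the talk uses deconvolutional generative feedback; this toy sketch substitutes a single tied-weight linear layer, and all names here are hypothetical), one can show a feed-forward code being refined so that the generative pathway's reconstruction better explains a noisy input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "feed-forward" encoder; its transpose plays the role of the
# generative (feedback) decoder. Weights are tied purely for simplicity.
W = 0.1 * rng.standard_normal((8, 16))  # encoder: 16-dim input -> 8-dim code

def feedforward(x):
    return W @ x

def generate(z):
    return W.T @ z  # generative reconstruction of the input from the code

def infer_with_feedback(x, steps=50, lr=0.1):
    # Start from the one-shot feed-forward estimate, then iteratively refine
    # the code so the generative model's reconstruction matches the input
    # (gradient descent on 0.5 * ||W.T @ z - x||^2).
    z = feedforward(x)
    for _ in range(steps):
        err = generate(z) - x  # reconstruction error fed back
        z -= lr * (W @ err)    # correct the code using the feedback signal
    return z

x = rng.standard_normal(16)
x_noisy = x + 0.3 * rng.standard_normal(16)

z_ff = feedforward(x_noisy)          # purely feed-forward code
z_fb = infer_with_feedback(x_noisy)  # feedback-refined code

# Feedback iterations reduce how badly the generative model misexplains
# the corrupted input, relative to the one-shot feed-forward code.
err_ff = np.linalg.norm(generate(z_ff) - x_noisy)
err_fb = np.linalg.norm(generate(z_fb) - x_noisy)
print(err_ff, err_fb)
```

The refinement loop is the essential mechanism: the feed-forward pass proposes a code, and the generative feedback pathway corrects it until prediction and evidence are consistent.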
In the second part of my talk, I will discuss some tools to analyze feedback mathematically, drawing from linear control theory. I will discuss a new reinforcement learning method that is able to achieve surprisingly low (logarithmic) regret using a combination of long-term and online learning. I will also discuss robust learning methods that can maintain safety and stability criteria, essential for feedback control systems. I will then discuss ways to bridge mathematical theory with real-world requirements.
Bio: Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors such as the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum’s Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focus is on unsupervised AI, optimization, and tensor methods.