March 2, 2020

12:00 pm – 1:15 pm

Venue

Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218

Abstract
While deep learning produces supervised models with unprecedented predictive performance on many tasks, under typical training procedures, advantages over classical methods emerge only with large datasets. The extreme data-dependence of reinforcement learners may be even more problematic. Millions of experiences sampled from video games come cheaply, but human-interacting systems can’t afford to waste so much labor.

In this talk, I will discuss several efforts to increase the labor-efficiency of learning from human interactions. Specifically, I will cover work on learning dialogue policies, deep active learning for natural language processing, learning from noisy and singly-labeled data, and active learning with partial feedback. Finally, time permitting, I’ll discuss a new approach for reducing the reliance of NLP models on spurious associations in the data, one that relies on a new mechanism for interacting with annotators.
Biography
Zachary Chase Lipton is an assistant professor at Carnegie Mellon University, appointed in both the Machine Learning Department and the Tepper School of Business. His research spans core machine learning methods and their social impact, and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data. He is the founder of the Approximately Correct blog (approximatelycorrect.com) and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks. Find him on Twitter (@zacharylipton) or GitHub (@zackchase).