May 4, 2018

12:00 pm / 1:15 pm

Venue

Hackerman Hall B17 @ 3400 N Charles St, Baltimore, MD 21218, USA

Abstract

Language use varies depending on context, which is reflected in a variety of factors, including topic, location, communication mode, social context or objective, and individual style. Differences are often so great that text from one domain is of little use in another, but training separate language models for different contexts is not efficient. In this talk, we introduce a new mechanism for using context to control a recurrent neural network (RNN) language model, where context can include a variety of variables, both continuous and discrete. The approach builds on the idea of using a context embedding as input to an RNN, but uses the context vector to control a low-rank transformation of the recurrent layer weight matrix, inspired by multi-factor low-rank log-linear language models. Experiments show performance gains for several different types of context using multiple data sets and different tasks, and the models are computationally efficient compared to alternative approaches.

Biography
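The core mechanism described above can be illustrated with a minimal numpy sketch. The factorization here (a context vector producing the left factor of a rank-r additive update to the recurrent weight matrix, with a shared right factor) is one plausible parameterization assumed for illustration; the exact factorization used in the talk may differ. All names (`U`, `V`, `adapted_weights`, `rnn_step`) and dimensions are hypothetical toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d_h, d_c, r = 8, 3, 2  # hidden size, context size, adaptation rank (toy values)

# Base recurrent weight matrix, shared across all contexts
W = rng.standard_normal((d_h, d_h)) * 0.1

# Context maps to the left factor of a rank-r update; the right factor is shared.
# (Hypothetical parameterization, for illustration only.)
U = rng.standard_normal((d_c, d_h * r)) * 0.1  # context -> left factor
V = rng.standard_normal((r, d_h)) * 0.1        # shared right factor

def adapted_weights(c):
    """Return the context-adapted recurrence W + L(c) @ V (a rank-r update)."""
    L = (c @ U).reshape(d_h, r)  # rank-r left factor computed from context
    return W + L @ V

def rnn_step(h, x_contrib, c):
    """One vanilla-RNN step using the context-adapted recurrent matrix."""
    return np.tanh(adapted_weights(c) @ h + x_contrib)

c = np.array([1.0, 0.0, 0.0])  # e.g., a one-hot topic indicator as context
h = np.zeros(d_h)
h = rnn_step(h, rng.standard_normal(d_h) * 0.1, c)
```

Because the per-context update is rank r rather than a full d_h-by-d_h matrix, the context adds only d_c * d_h * r + r * d_h parameters, which is the source of the computational efficiency claimed in the abstract.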

Mari Ostendorf is an Endowed Professor of System Design Methodologies and Associate Vice Provost for Research at the University of Washington. She received her PhD in electrical engineering from Stanford University, then joined BBN Laboratories and later moved to Boston University. She joined the Electrical Engineering Department at the University of Washington in 1999, and currently also holds Adjunct appointments in Linguistics and in Computer Science and Engineering. From 2009 to 2012, she served as the Associate Dean for Research and Graduate Studies in the College of Engineering. She has been a visiting researcher at the ATR Interpreting Telecommunications Laboratory and the University of Karlsruhe, a Scottish Informatics and Computer Science Alliance Distinguished Visiting Fellow, and an Australia-America Fulbright Scholar at Macquarie University.

Prof. Ostendorf's research interests are in dynamic statistical models for speech and language processing. Her work seeks to integrate acoustic, prosodic and language cues for both speech understanding and generation, and to leverage similarities of spontaneous speech and informal text in data-driven learning. Her work has resulted in over 270 publications and 2 paper awards. She has served as an Editor of the IEEE Transactions on Audio, Speech and Language Processing and Computer Speech and Language, as the VP Publications for the IEEE Signal Processing Society, as a member of the IEEE Periodicals Committee, and on several other IEEE committees. Prof. Ostendorf is a Fellow of the IEEE and ISCA, a recipient of the 2010 IEEE HP Harriett B. Rigas Award, a 2013-2014 IEEE Signal Processing Society Distinguished Lecturer, and a recipient of the 2018 IEEE James L. Flanagan Speech and Audio Processing Award.