March 14, 2017

1:30 pm - 2:30 pm

Venue

Clark 314

Stochastic approximation for representation learning

Abstract: Unsupervised learning of useful features, or representations, is one of the most basic challenges of machine learning. Unsupervised representation learning techniques capitalize on unlabeled data, which is often cheap, abundant, and sometimes virtually unlimited. The goal of these ubiquitous techniques is to learn a representation that reveals intrinsic low-dimensional structure in data, disentangles underlying factors of variation, and is useful across multiple tasks and domains. This talk will focus on new theory and methods for large-scale representation learning. We will motivate a stochastic optimization view of representation learning in a big data setting, rather than viewing these techniques as dimensionality reduction methods for a given fixed dataset. We will put forth a mathematical definition of unsupervised learning, lay out the different objectives employed for unsupervised representation learning, and describe the stochastic approximation algorithms they admit. Time permitting, we will discuss applications to speech and language processing, social media analytics, and healthcare.
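To make the "stochastic optimization view" concrete, the sketch below shows one classic stochastic approximation algorithm for representation learning: Oja's rule for streaming PCA, which learns a low-dimensional subspace one unlabeled sample at a time rather than operating on a fixed dataset. This is an illustrative example only, not the specific algorithm presented in the talk, and all names and parameters in the code are assumptions.

```python
# Illustrative sketch only: the talk does not specify a particular algorithm.
# Oja's rule is a standard stochastic approximation method for streaming PCA.
# Each unlabeled sample updates the learned subspace, so no fixed dataset
# needs to be held in memory.
import numpy as np

def oja_streaming_pca(sample_stream, dim, k, lr=0.01):
    """Estimate the top-k principal subspace from a stream of samples."""
    rng = np.random.default_rng(0)
    # Random orthonormal initialization of the k-dimensional subspace.
    U, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    for t, x in enumerate(sample_stream, start=1):
        x = x.reshape(-1, 1)
        # Stochastic gradient step on the sample covariance x x^T,
        # with a decaying step size.
        U += (lr / np.sqrt(t)) * (x @ (x.T @ U))
        # Re-orthonormalize so the columns remain a valid subspace basis.
        U, _ = np.linalg.qr(U)
    return U  # columns span the learned low-dimensional representation

# Toy usage: samples drawn from a distribution with a dominant 2-D subspace.
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.standard_normal((20, 2)))[0]
stream = (basis @ rng.standard_normal(2) + 0.1 * rng.standard_normal(20)
          for _ in range(5000))
U_hat = oja_streaming_pca(stream, dim=20, k=2)
```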

Raman Arora (http://www.cs.jhu.edu/~raman) is an assistant professor in the Department of Computer Science at Johns Hopkins University (JHU), where he has been since 2014. He is affiliated with the Center for Language and Speech Processing (CLSP) and the Institute for Data Intensive Engineering and Science (IDIES). Prior to joining JHU, he was a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC), a visiting researcher at Microsoft Research (MSR) Redmond, and a postdoctoral scholar at the University of Washington in Seattle. He received his M.S. and Ph.D. degrees from the University of Wisconsin-Madison. His research interests lie at the interface of machine learning and stochastic optimization, with an emphasis on representation learning techniques including subspace learning, multiview learning, deep learning, and spectral learning. Central to his research is the theory and application of stochastic approximation algorithms that can scale to big data.

https://www.cs.jhu.edu/~raman/Home.html