May 26, 2020

12:00 pm – 1:00 pm

Venue

Zoom Meeting


Please join us for the MINDS & CIS Seminar Series

Tuesday, May 26, 2020 at 12:00 pm Eastern Time (US and Canada)

“Non-Parallel Emotion Conversion in Speech via Variational Cycle-GAN,” presented by Ravi Shankar (ECE, JHU)

and

“A Regularization View of Dropout in Neural Networks,” presented by Ambar Pal (CS, JHU)

Seminar will be remote via Zoom

Join Zoom Meeting
https://wse.zoom.us/j/91518303197?pwd=WnoxVjFhUWVsVElWdkZKcTVMVXpHZz09

Meeting ID: 915 1830 3197

Password: clark_hall

Talk 1: “Non-Parallel Emotion Conversion in Speech via Variational Cycle-GAN,” by Ravi Shankar (ECE, JHU)

Abstract – The quality of speech synthesis has improved tremendously in the recent past, owing mostly to the ability to train deep neural networks. The availability of large amounts of transcribed data, coupled with efficient representations of linguistic features, has played a key role in this regard. While current state-of-the-art models can generate emotionally neutral speech with ease, injecting an emotional style remains an open challenge. Further, speech synthesis is an autoregressive task, which makes it slow and computationally cumbersome. In this work, we will look at how converting the emotion in speech directly can provide a better alternative when resources are limited. Specifically, we propose an unsupervised framework that converts the underlying emotion of a speech utterance by exploiting the relationship between the feature representations. We further propose a new variant of the cycle-GAN that entangles the generators globally by minimizing the KL divergence between the input and output distributions. We demonstrate that our method generalizes to unseen speakers as well.
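
The abstract does not spell out the objective, but the idea of coupling the two cycle-GAN generators through a KL term can be illustrated with a short sketch. Everything below is an assumption made for illustration — the generator/discriminator names, the least-squares adversarial loss, and the diagonal-Gaussian approximation used to make the KL divergence between feature batches tractable — and is not the authors' implementation:

```python
# Minimal sketch (not the authors' code): a cycle-GAN generator loss with an
# added KL term that couples the two generators through batch feature statistics.
import torch

def gaussian_kl(x, y, eps=1e-6):
    """KL( N(mu_x, var_x) || N(mu_y, var_y) ) under a diagonal-Gaussian
    approximation of each batch of features; summed over feature dimensions."""
    mu_x, var_x = x.mean(0), x.var(0) + eps
    mu_y, var_y = y.mean(0), y.var(0) + eps
    kl = 0.5 * (torch.log(var_y / var_x)
                + (var_x + (mu_x - mu_y) ** 2) / var_y - 1.0)
    return kl.sum()

def vcgan_generator_loss(G_xy, G_yx, D_x, D_y, x, y,
                         lam_cyc=10.0, lam_kl=1.0):
    """x, y: batches of spectral features from the source and target emotions."""
    fake_y, fake_x = G_xy(x), G_yx(y)
    # Standard least-squares adversarial and cycle-consistency terms.
    adv = ((D_y(fake_y) - 1) ** 2).mean() + ((D_x(fake_x) - 1) ** 2).mean()
    cyc = (G_yx(fake_y) - x).abs().mean() + (G_xy(fake_x) - y).abs().mean()
    # KL between the real and converted feature distributions, tying the
    # two generators together at the distribution level.
    kl = gaussian_kl(y, fake_y) + gaussian_kl(x, fake_x)
    return adv + lam_cyc * cyc + lam_kl * kl
```

Unlike the per-sample cycle loss, the KL term here is computed on batch statistics, which is what would make the coupling between the generators global rather than pairwise.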

 

Bio – Ravi is a third-year PhD student in the Department of Electrical and Computer Engineering under the supervision of Dr. Archana Venkataraman, and he is a MINDS Data Science Fellow. His work focuses on the problem of emotion conversion in speech. His research interests include speech processing, signal processing, and unsupervised machine learning.

 

Talk 2: “A Regularization View of Dropout in Neural Networks,” by Ambar Pal (CS, JHU)

Abstract – Dropout is a popular training technique used to improve the performance of neural networks. However, a complete understanding of the theoretical underpinnings behind this success remains elusive. In this talk, we will take a regularization view to explain the empirically observed properties of Dropout. In the first part, we will investigate the case of a single hidden layer linear neural network with Dropout applied to the hidden layer, and observe how the Dropout algorithm can be seen as an instance of gradient descent applied to a changing objective. We will then see how training with Dropout is equivalent to adding a regularizer to the original network. With these tools, we will be able to show that Dropout is equivalent to a nuclear-norm regularized problem, where the nuclear norm is taken on the product of the weight matrices of the network (see the sketch after the abstract).
Inspired by the success of Dropout, several variants have recently been proposed in the community. In the second part of the talk, we will analyze some of these variants (DropBlock and DropConnect) and obtain theoretical reasons for their success over vanilla Dropout. Finally, we will end with a unified theory for analyzing Dropout variants, and discuss some of its implications.
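
The nuclear-norm claim in the first part of the abstract corresponds to a known calculation from the dropout-as-regularization literature; the following is a sketch in my own notation — a single hidden layer linear network x ↦ UVᵀx with r hidden units, retain probability θ, and an i.i.d. Bernoulli(θ) mask z on the hidden layer — and not necessarily the exact formulation used in the talk:

```latex
% Marginalizing the dropout mask turns the stochastic loss into a
% deterministic loss plus an explicit regularizer:
\mathbb{E}_{z}\Bigl\| y - \tfrac{1}{\theta}\,U\,\mathrm{diag}(z)\,V^{\top}x \Bigr\|^{2}
  \;=\; \bigl\| y - U V^{\top} x \bigr\|^{2}
  \;+\; \frac{1-\theta}{\theta}\sum_{j=1}^{r}\|u_{j}\|^{2}\,(v_{j}^{\top}x)^{2}.

% For whitened inputs the data-dependent factor averages to \|v_j\|^2, and
% minimizing the induced regularizer over all factorizations of a fixed
% product A = U V^{\top} yields a squared nuclear norm (for r large enough):
\min_{U V^{\top} = A}\;\sum_{j=1}^{r}\|u_{j}\|^{2}\,\|v_{j}\|^{2}
  \;=\; \tfrac{1}{r}\,\|A\|_{*}^{2}.
```

The first identity follows from the variance of the rescaled mask, Var(z_j/θ) = (1−θ)/θ; the second is the standard variational characterization of the nuclear norm applied term by term.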

Bio – Ambar is a PhD student in the Computer Science Department at the Johns Hopkins University. Ambar is advised by René Vidal, and is affiliated with the Mathematical Institute for Data Science and the Vision Lab at JHU. Previously, he obtained his Bachelor’s degree in Computer Science from IIIT Delhi. His current research interest lies in the theory of deep learning, specifically, trying to theoretically understand the properties induced by common deep learning techniques on the optimization of deep architectures. He is currently working on understanding the regularization properties induced by common tricks used in training DNNs. He has a secondary interest in understanding adversarial examples generated for computer vision systems. He is a MINDS Data Science Fellow, and his research has been supported by the IARPA DIVA and the DARPA GARD grants.