November 1, 2019

12:00 pm – 1:15 pm


Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218

Spoken communication is basic to who we are. Neurological conditions that result in loss of speech can be devastating for affected patients. This talk will summarize recent efforts in decoding neural activity recorded directly from the surface of the speech cortex during fluent speech production, monitored using intracranial electrocorticography (ECoG). Decoding speech from neural activity is challenging because speaking requires very precise and rapid multi-dimensional control of the vocal tract articulators. I will first describe the articulatory encoding characteristics of the speech motor cortex and compare them against other representations, such as phonemes. I will then describe deep learning approaches that convert neural activity into these articulatory physiological signals, which can then be transformed into audible speech acoustics or decoded to text. We show that such biomimetic strategies make optimal use of available data, generalize well across subjects, and also enable silent speech decoding. These results set a new benchmark in the development of brain-computer interfaces for assistive communication in paralyzed individuals with intact cortical function.
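The two-stage "biomimetic" pipeline described above can be sketched minimally: neural features are first mapped to an intermediate articulatory representation, which is then mapped to acoustic features. The sketch below uses plain NumPy with random linear maps purely for illustration; the feature dimensions, the linear stages, and the `decode` function are assumptions for this example, not the speaker's actual models (which use deep recurrent networks trained on real ECoG data).

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration:
# 256 ECoG electrodes, 33 articulatory kinematic features, 32 acoustic features.
N_ELECTRODES, N_ARTIC, N_ACOUSTIC = 256, 33, 32

rng = np.random.default_rng(0)

# Stage 1: map neural activity to articulatory kinematics.
# A real system would use a trained recurrent network; a random linear map stands in here.
W_artic = rng.standard_normal((N_ELECTRODES, N_ARTIC)) * 0.01

# Stage 2: map articulatory kinematics to acoustic features
# (e.g. spectral parameters that a vocoder could turn into audible speech).
W_acoustic = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) * 0.1

def decode(ecog_highgamma):
    """ecog_highgamma: (time, N_ELECTRODES) array of neural features."""
    articulation = np.tanh(ecog_highgamma @ W_artic)  # intermediate physiological signal
    acoustics = articulation @ W_acoustic             # final acoustic features
    return articulation, acoustics

# 100 time frames of simulated neural activity.
neural = rng.standard_normal((100, N_ELECTRODES))
artic, acoustic = decode(neural)
print(artic.shape, acoustic.shape)  # (100, 33) (100, 32)
```

The point of the intermediate articulatory stage is that it constrains the decoder to a low-dimensional, physiologically meaningful space, which the talk argues improves data efficiency and cross-subject generalization.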
Gopala Anumanchipalli, PhD, is an associate researcher in the Department of Neurological Surgery at the University of California, San Francisco. His research is in understanding the neural mechanisms of human speech production toward developing next-generation brain-computer interfaces. Gopala was a postdoctoral fellow at UCSF working with Edward F. Chang, MD, and previously received his PhD in Language and Information Technologies from Carnegie Mellon University, working with Prof. Alan Black on speech synthesis.