Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218
In this talk, I will focus on the recent efforts at ASRU 2019, investigating speech recognition for Mandarin and English mixed speech. With burgeoning advancements in transportation and communication, many cultures find themselves becoming intertwined. Given that English has become the default global communication language, codemixing with English is a common phenomenon for non-English spoken languages. While it might be easy for humans to understand codemixed utterances, the recognition accuracy of machines still suffers greatly when the spoken language changes in the middle of an utterance.
In this ASRU effort, researchers aim at improving recognition accuracy, ranging from the traditional HMM-DNN hybrid approach to the recent end-to-end methodology, which gets rid of phonetic pronunciation dictionaries completely. This exercise gives the community a common ground to understand where we are now and what to do next, at least for Mandarin-English codemixing. More research and more data collection are needed to achieve performance comparable to monolingual recognition. At Mobvoi, we hope this study will allow us to develop a satisfactory speech input for scenarios where two or more spoken languages are mixed, such as Cantonese, Mandarin, and English, as commonly seen in areas like Hong Kong.
Mei-Yuh received her PhD in Computer Science from Carnegie Mellon University in 1993 and worked at Microsoft in Seattle and China for 18 years, publishing numerous conference and journal papers and, more importantly, delivering industry-standard products in speech recognition, Bing machine translation, and Cortana language understanding. She is an IEEE fellow and is passionate about bridging the gap between academia and industry. She is currently the director of the Mobvoi AI Lab in Seattle, WA.