

dc.contributor.author: Oura, Keiichiro
dc.contributor.author: Tokuda, Keiichi
dc.contributor.author: Yamagishi, Junichi
dc.contributor.author: Wester, Mirjam
dc.contributor.author: King, Simon
dc.date.accessioned: 2011-01-19T10:51:43Z
dc.date.available: 2011-01-19T10:51:43Z
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1842/4657
dc.description.abstract: In the EMIME project, we are developing a mobile device that performs personalized speech-to-speech translation such that a user's spoken input in one language is used to produce spoken output in another language, while continuing to sound like the user's voice. We integrate two techniques, unsupervised adaptation for HMM-based TTS using a word-based large-vocabulary continuous speech recognizer and cross-lingual speaker adaptation for HMM-based TTS, into a single architecture. Thus, an unsupervised cross-lingual speaker adaptation system can be developed. Listening tests show very promising results, demonstrating that adapted voices sound similar to the target speaker and that differences between supervised and unsupervised cross-lingual speaker adaptation are small.
dc.title: Unsupervised Cross-lingual Speaker Adaptation for HMM-based Speech Synthesis
dc.type: Conference Paper
rps.title: Proc. of ICASSP
dc.date.updated: 2011-01-19T10:51:44Z

