
dc.contributor.author     Kurimo, Mikko [en]
dc.contributor.author     Byrne, William [en]
dc.contributor.author     Dines, John [en]
dc.contributor.author     Garner, Philip N. [en]
dc.contributor.author     Gibson, Matthew [en]
dc.contributor.author     Guan, Yong [en]
dc.contributor.author     Hirsimaki, Teemu [en]
dc.contributor.author     Karhila, Reima [en]
dc.contributor.author     King, Simon [en]
dc.contributor.author     Liang, Hui [en]
dc.contributor.author     Oura, Keiichiro [en]
dc.contributor.author     Saheer, Lakshmi [en]
dc.contributor.author     Shannon, Matt [en]
dc.contributor.author     Shiota, Sayaka [en]
dc.contributor.author     Tian, Jilei [en]
dc.contributor.author     Tokuda, Keiichi [en]
dc.contributor.author     Wester, Mirjam [en]
dc.contributor.author     Wu, Yi-Jian [en]
dc.contributor.author     Yamagishi, Junichi [en]
dc.date.accessioned       2010-12-22T11:34:53Z
dc.date.available         2010-12-22T11:34:53Z
dc.date.issued            2010
dc.identifier.uri         http://hdl.handle.net/1842/4566
dc.description.abstract   In the EMIME project we have studied unsupervised cross-lingual speaker adaptation. We have employed an HMM statistical framework for both speech recognition and synthesis which provides transformation mechanisms to adapt the synthesized voice in TTS (text-to-speech) using the recognized voice in ASR (automatic speech recognition). An important application for this research is personalised speech-to-speech translation that will use the voice of the speaker in the input language to utter the translated sentences in the output language. In mobile environments this enhances the users' interaction across language barriers by making the output speech sound more like the original speaker's way of speaking, even if she or he could not speak the output language. [en]
dc.title                  Personalising speech-to-speech translation in the EMIME project [en]
dc.type                   Conference Paper [en]
rps.title                 Proc. of the ACL 2010 System Demonstrations [en]
dc.date.updated           2010-12-22T11:34:54Z
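
The abstract describes an architecture in which HMM-based ASR both transcribes the input speech and estimates a speaker adaptation transform without supervision, and HMM-based TTS then applies that transform so the translated sentence is uttered in a voice resembling the original speaker. The Python sketch below only illustrates that data flow under the assumptions just stated; it is not the EMIME software, and every function, class, and argument name in it is a hypothetical placeholder.

from dataclasses import dataclass, field

@dataclass
class SpeakerTransform:
    # Placeholder for an adaptation transform estimated from the input
    # speaker's voice (e.g. a linear transform over HMM parameters).
    parameters: list = field(default_factory=list)

def recognise(audio, input_lang):
    # HMM-based ASR (placeholder): transcribe the input speech and, as a
    # by-product, estimate a speaker adaptation transform without any
    # reference transcript from the speaker (unsupervised adaptation).
    text = "..."                      # recognised sentence (placeholder)
    transform = SpeakerTransform()    # unsupervised speaker transform
    return text, transform

def translate(text, input_lang, output_lang):
    # Machine translation of the recognised sentence (placeholder).
    return text

def synthesise(text, output_lang, transform):
    # HMM-based TTS (placeholder): apply the cross-lingual speaker
    # transform to the synthesis models for the output language so the
    # synthetic voice resembles the original speaker.
    return b""                        # synthesised waveform (placeholder)

def speech_to_speech(audio, input_lang, output_lang):
    # End-to-end personalised speech-to-speech translation.
    text, transform = recognise(audio, input_lang)
    translated = translate(text, input_lang, output_lang)
    return synthesise(translated, output_lang, transform)

The point of threading the SpeakerTransform value from recognise to synthesise is that, in the approach the abstract describes, the adaptation information recovered during recognition in the input language is what personalises the synthesis models in the output language.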

