dc.contributor.author: Hofer, Gregor
dc.contributor.author: Shimodaira, Hiroshi
dc.contributor.author: Yamagishi, Junichi
dc.date.accessioned: 2007-09-19T12:41:34Z
dc.date.available: 2007-09-19T12:41:34Z
dc.date.issued: 2007
dc.identifier.citation: Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Speech-driven head motion synthesis based on a trajectory model. Poster at Siggraph 2007, 2007.
dc.identifier.uri: http://hdl.handle.net/1842/2002
dc.description.abstract: Making human-like characters more natural and life-like requires more inventive approaches than current standard techniques such as synthesis using text features or triggers. In this poster we present a novel approach to automatically synthesise head motion from speech features. Previous work has focused on frame-wise modelling of motion [Busso et al. 2007] or has treated the speech and motion data streams separately [Brand 1999], although the trajectories of head motion and speech features are highly correlated and change dynamically over several frames. To model longer units of motion and speech and to reproduce their trajectories during synthesis, we utilise a promising time-series stochastic model called "Trajectory Hidden Markov Models" [Zen et al. 2007]. Its parameter generation algorithm can produce motion trajectories from sequences of units of motion and speech. These two kinds of data are modelled simultaneously using a multi-stream variant of trajectory HMMs. The models can be viewed as a Kalman-smoother-like approach, and are thereby capable of producing smooth trajectories.
dc.format.extent: 466141 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: speech technology
dc.title: Speech-driven head motion synthesis based on a trajectory model.
dc.type: Conference Paper
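
The parameter generation algorithm the abstract attributes to [Zen et al. 2007] is, at its core, the standard maximum-likelihood trajectory estimation used with trajectory HMMs: given per-frame means and variances of static and delta features from an HMM state alignment, the smooth trajectory c solves (W' Σ⁻¹ W) c = W' Σ⁻¹ μ, where W stacks the static/delta window coefficients. Below is a minimal NumPy sketch of that step, assuming diagonal covariances, a single one-dimensional feature stream, and a simple first-order delta window. The function name and toy data are illustrative only, not the authors' implementation, and the joint multi-stream modelling of speech and motion described in the abstract is omitted.

import numpy as np


def ml_trajectory(mu, var):
    """Recover a smooth 1-D trajectory from stacked [static, delta] statistics.

    mu, var: arrays of shape (T, 2) holding per-frame means/variances of the
    static feature and its delta, as produced by an HMM state alignment.
    """
    T = mu.shape[0]
    # Window matrix W: row 2t picks the static value at frame t,
    # row 2t+1 applies the delta window 0.5 * (c[t+1] - c[t-1]).
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0
        W[2 * t + 1, max(t - 1, 0)] -= 0.5
        W[2 * t + 1, min(t + 1, T - 1)] += 0.5
    # Diagonal inverse covariance; mu/var are flattened to match W's rows.
    P = np.diag(1.0 / var.reshape(-1))
    A = W.T @ P @ W
    b = W.T @ P @ mu.reshape(-1)
    # Solve the normal equations (W' P W) c = W' P mu for the ML trajectory.
    return np.linalg.solve(A, b)


# Toy usage (hypothetical data): two "states" with different static means;
# the delta constraints pull the result towards a smooth transition.
mu = np.concatenate([np.tile([0.0, 0.0], (10, 1)),
                     np.tile([1.0, 0.0], (10, 1))])
var = np.full(mu.shape, 0.1)
print(ml_trajectory(mu, var))

Because the delta rows of W couple neighbouring frames, the solution interpolates smoothly between the state means rather than jumping at state boundaries, which is the sense in which the abstract likens the model to a Kalman smoother.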