Edinburgh Research Archive

Lip motion synthesis using a context dependent trajectory hidden Markov model

View/Open
sca07.pdf (98.08Kb)
Date
2007
Author
Hofer, Gregor
Shimodaira, Hiroshi
Yamagishi, Junichi
Abstract
Lip synchronisation is essential to make character animation believable. In this poster we present a novel technique to automatically synthesise lip motion trajectories given some text and speech. Our work distinguishes itself from other work by not using visemes (visual counterparts of phonemes). The lip motion trajectories are directly modelled using a time series stochastic model called a "Trajectory Hidden Markov Model". Its parameter generation algorithm can produce motion trajectories that are used to drive control points on the lips directly.
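
The abstract gives no implementation detail, so the following is a minimal, hypothetical sketch of the kind of parameter-generation step a trajectory HMM relies on: solving for a smooth static trajectory from per-frame static and delta statistics (the standard maximum-likelihood parameter generation formulation). The function name, the central-difference delta window, and the 1-D lip-control-point feature are illustrative assumptions, not the authors' code.

import numpy as np

def generate_trajectory(means, variances):
    """Solve W' V^-1 W c = W' V^-1 mu for the smooth static trajectory c.

    means, variances: arrays of shape (T, 2) holding [static, delta]
    statistics per frame, as would be produced by an HMM state alignment.
    (Illustrative sketch, not the paper's implementation.)
    """
    T = means.shape[0]
    # Window matrix W (2T x T): one static row and one delta row per frame.
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                      # static window
        if 0 < t < T - 1:                      # central-difference delta window
            W[2 * t + 1, t - 1] = -0.5
            W[2 * t + 1, t + 1] = 0.5
    mu = means.reshape(-1)
    prec = 1.0 / variances.reshape(-1)         # diagonal inverse covariance
    A = W.T @ (prec[:, None] * W)
    b = W.T @ (prec * mu)
    return np.linalg.solve(A, b)               # smooth trajectory of length T

# Toy usage: a lip-opening target that rises then falls.
T = 10
means = np.zeros((T, 2))
means[:, 0] = np.concatenate([np.linspace(0, 1, 5), np.linspace(1, 0, 5)])
variances = np.full((T, 2), 0.1)
print(generate_trajectory(means, variances))

Because the delta constraints couple neighbouring frames, the solved trajectory is smoother than the raw per-frame means, which is the property that lets the generated curves drive lip control points directly.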
URI
http://hdl.handle.net/1842/2008
Collections
  • CSTR publications
