
dc.contributor.author: Richmond, Korin
dc.date.accessioned: 2006-05-26T10:46:33Z
dc.date.available: 2006-05-26T10:46:33Z
dc.date.issued: 1999
dc.identifier.citation: In Proc. Eurospeech, volume 1, pages 149-152, Budapest, Hungary, 1999.
dc.identifier.uri: http://www.isca-speech.org/archive/eurospeech_1999/index.html
dc.identifier.uri: http://hdl.handle.net/1842/1177
dc.description.abstract: This paper reports on present work, in which a recurrent neural network is trained to estimate `velum height' during continuous speech. Parallel acoustic-articulatory data comprising more than 400 read TIMIT sentences is obtained using electromagnetic articulography (EMA). This data is processed and used as training data for a range of neural network sizes. The network demonstrating the highest accuracy is identified. This performance is then evaluated in detail by analysing the network's output for each phonetic segment contained in 50 hand-labelled utterances set aside for testing purposes.
dc.format.extent: 200103 bytes
dc.format.extent: 56797 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: International Speech Communication Association
dc.title: Estimating velum height from acoustics during continuous speech.
dc.type: Conference Paper
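
The abstract above describes training a recurrent network to map acoustic frames to a continuous `velum height' value, selecting the best-performing network size, and evaluating it on held-out hand-labelled utterances. The sketch below is purely illustrative and not taken from the paper: the acoustic feature dimension, hidden size, MSE objective, and PyTorch framing are all assumptions standing in for the authors' actual setup.

    # Illustrative sketch only: a small recurrent network mapping a sequence
    # of acoustic feature frames to one articulatory value per frame (here,
    # a stand-in for velum height). All dimensions are assumed, not sourced.
    import torch
    import torch.nn as nn

    class AcousticToVelum(nn.Module):
        def __init__(self, n_acoustic=20, n_hidden=64):
            super().__init__()
            # Recurrent layer over the acoustic frame sequence.
            self.rnn = nn.RNN(input_size=n_acoustic, hidden_size=n_hidden,
                              batch_first=True)
            # Linear readout to a single articulatory channel per frame.
            self.out = nn.Linear(n_hidden, 1)

        def forward(self, x):
            # x: (batch, frames, n_acoustic)
            h, _ = self.rnn(x)
            return self.out(h)              # (batch, frames, 1)

    model = AcousticToVelum()
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy tensors standing in for parallel acoustic-articulatory (EMA) data.
    acoustic = torch.randn(8, 200, 20)      # 8 utterances, 200 frames each
    velum = torch.randn(8, 200, 1)          # target velum-height trajectory

    for _ in range(10):                     # toy training loop
        optimizer.zero_grad()
        loss = criterion(model(acoustic), velum)
        loss.backward()
        optimizer.step()

In practice one would train several such networks of different sizes and keep the most accurate one, mirroring the model-selection step the abstract mentions.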

