Show simple item record

dc.contributor.author: Takeda, Haruto
dc.contributor.author: Saito, Naoki
dc.contributor.author: Otsuki, Tomoshi
dc.contributor.author: Nakai, Mitsuru
dc.contributor.author: Shimodaira, Hiroshi
dc.contributor.author: Sagayama, Shigeki
dc.coverage.spatial: 4
dc.date.accessioned: 2006-05-10T17:34:28Z
dc.date.available: 2006-05-10T17:34:28Z
dc.date.issued: 2002-12
dc.identifier.citation: In Multimedia Signal Processing, 2002 IEEE Workshop on, 9-11 Dec. 2002, Page(s): 428 - 431
dc.identifier.uri: http://ieeexplore.ieee.org/servlet/opac?punumber=8561
dc.identifier.uri: http://hdl.handle.net/1842/961
dc.description.abstract: This paper describes a Hidden Markov Model (HMM)-based method for automatic transcription of MIDI (Musical Instrument Digital Interface) signals of performed music. The problem is formulated as recognizing a given sequence of fluctuating note durations to find the most likely intended note sequence, drawing on modern continuous speech recognition techniques. Combining a stochastic model of deviating note durations with a stochastic grammar representing possible note sequences, the maximum likelihood estimate of the note sequence is found using the Viterbi algorithm. The same principle is successfully applied to the joint problem of bar line allocation, time measure recognition, and tempo estimation. Finally, the durations of n consecutive notes are combined into a "rhythm vector" representing tempo-free relative durations of the notes and treated in the same framework. Significant improvements over conventional "quantization" techniques are shown.
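The decoding idea the abstract describes can be sketched as a toy Viterbi decoder over tempo-normalized note durations. This is a minimal illustration, not the paper's actual model: the note-value state space, the Gaussian deviation model, and the uniform transition probabilities standing in for the stochastic grammar are all assumptions made here for brevity.

```python
import math

# Hypothetical hidden states: intended note values in beats (assumed here;
# the paper's state space and grammar are richer than this sketch).
NOTE_VALUES = {"eighth": 0.5, "quarter": 1.0, "half": 2.0}

# A uniform transition log-probability stands in for the stochastic grammar.
LOG_TRANS = math.log(1.0 / len(NOTE_VALUES))

def log_emission(observed, intended, sigma=0.1):
    """Log-likelihood of an observed (tempo-normalized) duration under an
    assumed Gaussian deviation model around the intended note value."""
    return (-0.5 * ((observed - intended) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi)))

def viterbi(observations):
    """Return the most likely intended note sequence for observed durations."""
    states = list(NOTE_VALUES)
    # Initialization: uniform prior over states; store (score, path) per state.
    trellis = [{s: (log_emission(observations[0], NOTE_VALUES[s]), [s])
                for s in states}]
    for obs in observations[1:]:
        column = {}
        # Best predecessor is shared here because transitions are uniform.
        _, (best_score, best_path) = max(trellis[-1].items(),
                                         key=lambda kv: kv[1][0])
        for s in states:
            score = best_score + LOG_TRANS + log_emission(obs, NOTE_VALUES[s])
            column[s] = (score, best_path + [s])
        trellis.append(column)
    _, (_, path) = max(trellis[-1].items(), key=lambda kv: kv[1][0])
    return path
```

With a non-uniform grammar the inner maximization would run per target state, which is where the paper's combination of duration model and note-sequence grammar comes in.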
dc.format.extent: 321544 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: IEEE
dc.title: Hidden Markov Model for Automatic Transcription of MIDI Signals
dc.type: Conference Paper

