Language of music: a computational model of music interpretation
Date: 02/07/2018
Author: McLeod, Andrew Philip
Abstract
Automatic music transcription (AMT) is commonly defined as the process of converting
an acoustic musical signal into some form of musical notation, and can be split
into two separate phases: (1) multi-pitch detection, the conversion of an audio signal
into a time-frequency representation similar to a MIDI file; and (2) converting from
this time-frequency representation into a musical score. Much recent AMT research
has concentrated on multi-pitch detection, yet progress on the transcription of
polyphonic music has been limited.
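The two-phase decomposition above can be sketched as a trivial pipeline. This is only an illustration of the interface between the phases, not any real system: the detector output here is hard-coded placeholder data, and all function names are hypothetical.

```python
def multi_pitch_detection(audio):
    """Phase 1 (sketch): acoustic signal -> MIDI-like note events.

    A real detector analyses the audio; this placeholder simply
    returns two (midi_pitch, onset_sec, offset_sec) tuples.
    """
    return [(60, 0.0, 0.5), (64, 0.5, 1.0)]


def notation_conversion(notes):
    """Phase 2 (sketch): MIDI-like note events -> score-level symbols.

    Real conversion involves pitch spelling, voice assignment, and
    rhythmic quantisation; here we only render pitch names.
    """
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return [f"{names[p % 12]}{p // 12 - 1}" for p, onset, offset in notes]


def transcribe(audio):
    """Full AMT sketch: chain the two phases."""
    return notation_conversion(multi_pitch_detection(audio))
```

Running `transcribe` on any input yields `["C4", "E4"]` from the placeholder notes; the point is only that phase 2 consumes exactly what phase 1 produces.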
There are many potential reasons for this slow progress, but this thesis concentrates
on the (lack of) use of music language models during the transcription process. In particular,
a music language model would impart to a transcription system the background
knowledge of music theory upon which a human transcriber relies. In the related field
of automatic speech recognition, it has been shown that the use of a language model
drawn from the field of natural language processing (NLP) is an essential component
of a system for transcribing speech into text, and there is no reason to believe
that music should be any different.
This thesis will show that a music language model inspired by NLP techniques can
be used successfully for transcription. In fact, this thesis will create the blueprint for
such a music language model. We begin with a brief overview of existing multi-pitch
detection systems, in particular noting four key properties which any music language
model should have to be useful for integration into a joint system for AMT: it should
(1) be probabilistic, (2) not use any data a priori, (3) be able to run on live performance
data, and (4) be incremental.
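The four properties can be made concrete with a toy model. The sketch below is purely illustrative and not the thesis's model: a hypothetical pitch-bigram model that is probabilistic (property 1), learns only from the notes seen so far rather than from a priori data (property 2), and processes a live note stream one event at a time without lookahead (properties 3 and 4).

```python
from abc import ABC, abstractmethod


class MusicLanguageModel(ABC):
    """Hypothetical interface capturing the four desired properties."""

    @abstractmethod
    def update(self, pitch):
        """Incrementally incorporate the next observed note (incremental,
        live: no lookahead into future events is assumed)."""

    @abstractmethod
    def probability(self, pitch):
        """Return a probability for a candidate next note (probabilistic)."""


class ToyPitchBigram(MusicLanguageModel):
    """Toy bigram over MIDI pitches, trained only on notes seen so far."""

    def __init__(self):
        self.counts = {}   # (previous_pitch, pitch) -> count
        self.prev = None   # most recently observed pitch

    def update(self, pitch):
        if self.prev is not None:
            key = (self.prev, pitch)
            self.counts[key] = self.counts.get(key, 0) + 1
        self.prev = pitch

    def probability(self, pitch):
        if self.prev is None:
            return 1.0  # no context yet
        total = sum(c for (p, _), c in self.counts.items() if p == self.prev)
        if total == 0:
            return 1.0 / 128  # uniform over MIDI pitches for unseen context
        return self.counts.get((self.prev, pitch), 0) / total
```

After feeding the alternating stream 60, 62, 60, 62, 60 note by note, the model assigns probability 1.0 to pitch 62 as the next event and 0.0 to pitch 61, having learned the alternation from the performance alone.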
We then investigate voice separation, creating a model which achieves state-of-the-art
performance on the task, and show that, used as a simple music language model, it
improves multi-pitch detection performance significantly. This is followed by an investigation
of metrical detection and alignment, where we introduce a grammar crafted for
the task which, combined with a beat-tracking model, achieves state-of-the-art results
on metrical alignment. This system’s success adds further evidence to the long-standing
hypothesis that music and language share extremely similar structures.
We end by investigating the joint analysis of music, in particular showing that a
combination of our two models running jointly outperforms each running independently.
We also introduce a new joint, automatic, quantitative metric for the complete
transcription of an audio recording into an annotated musical score, something which
the field currently lacks.
Related items
- Facilitating musical learning in Scottish Primary Schools: an interview-based study of generalist primary teachers’, primary music specialists’ and community music practitioners’ views and experiences
  Bhachu, Diljeet Kaur (The University of Edinburgh, 2019-11-27)
  Confidence in teaching music has been a long-standing issue for Scottish generalist primary teachers, and, amidst cuts to specialist teachers and instrumental music in Scottish schools, generalist teachers are increasingly ...

- Music in communication: improvisation in music therapy
  Pavlicevic, Mercedes (The University of Edinburgh, 1991)

- The role of music in the politics and performing arts as evidenced in a crucial musical treatise of the Japanese medieval period, the Kyōkunshō 教訓抄
  Kato, Yuri (The University of Edinburgh, 2018-07-07)
  Gagaku, ancient Japanese court music and dance, known today as a traditional performing art, has over a thousand years of history since its introduction from the East Asian mainland. Despite the fact that the study of ...