
dc.contributor.advisor: Renals, Stephen
dc.contributor.advisor: Wolters, Maria
dc.contributor.author: Isaac, Karl Bruce
dc.date.accessioned: 2016-06-15T08:50:37Z
dc.date.available: 2016-06-15T08:50:37Z
dc.date.issued: 2015-06-29
dc.identifier.uri: http://hdl.handle.net/1842/15870
dc.description.abstract: Synthetic speech is a valuable means of output, in a range of application contexts, for people with visual, cognitive, or other impairments, or for situations where other means are not practicable. Noise and reverberation occur in many of these application contexts and are known to have devastating effects on the intelligibility of natural speech, yet very little was known about their effects on synthetic speech based on unit selection or hidden Markov models. In this thesis, we put forward an approach for assessing the intelligibility of synthetic and natural speech in noise, reverberation, or a combination of the two. The approach uses an experimental methodology consisting of Amazon Mechanical Turk, Matrix sentences, and noises that approximate the real world, evaluated with generalized linear mixed models. The experimental methodologies were assessed against their traditional counterparts and were found to provide a number of additional benefits, whilst maintaining equivalent measures of relative performance. Subsequent experiments were carried out to establish the efficacy of the approach in measuring intelligibility in noise and then reverberation. Finally, the approach was applied to natural speech and the two synthetic speech systems in combinations of noise and reverberation. We examine and report on the intelligibility of current synthesis systems in real-life noises and reverberation, using techniques that bridge the gap between the audiology and speech synthesis communities and using Amazon Mechanical Turk. In the process, we establish Amazon Mechanical Turk and Matrix sentences as valuable tools in the assessment of synthetic speech intelligibility.
dc.contributor.sponsor: Engineering and Physical Sciences Research Council (EPSRC)
dc.language.iso: en
dc.publisher: The University of Edinburgh
dc.relation.hasversion: Wolters, M. K., Isaac, K. B., and Doherty, J. M. (2012). Hold that thought: Are spearcons less disruptive than spoken reminders? In CHI ’12 Extended Abstracts on Human Factors in Computing Systems, pages 1745–1750.
dc.relation.hasversion: Wolters, M. K., Isaac, K. B., and Renals, S. (2010). Evaluating speech synthesis intelligibility using Amazon Mechanical Turk. In Proceedings of 7th Speech Synthesis Workshop (SSW7), pages 136–141.
dc.relation.hasversion: Wolters, M. K., Johnson, C., and Isaac, K. B. (2011). Can the Hearing Handicap Inventory for Adults be used as a screen for perception experiments? In ICPhS XVII 2011, Hong Kong.
dc.subject: intelligibility
dc.subject: synthetic speech
dc.subject: noise
dc.subject: reverberation
dc.title: Intelligibility of synthetic speech in noise and reverberation
dc.type: Thesis or Dissertation
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD Doctor of Philosophy

