
Proc. 7th Speech Synthesis Workshop (SSW7)

dc.contributor.author: Wolters, Maria K.
dc.contributor.author: Isaac, Karl B.
dc.contributor.author: Renals, Steve
dc.date.accessioned: 2011-01-19T12:27:05Z
dc.date.available: 2011-01-19T12:27:05Z
dc.date.issued: 2010
dc.identifier.uri: http://hdl.handle.net/1842/4660
dc.description.abstract: Microtask platforms such as Amazon Mechanical Turk (AMT) are increasingly used to create speech and language resources. AMT in particular allows researchers to quickly recruit a large number of fairly demographically diverse participants. In this study, we investigated whether AMT can be used for comparing the intelligibility of speech synthesis systems. We conducted two experiments in the lab and via AMT, one comparing US English diphone to US English speaker-adaptive HTS synthesis and one comparing UK English unit selection to UK English speaker-dependent HTS synthesis. While AMT word error rates were worse than lab error rates, AMT results were more sensitive to relative differences between systems. This is mainly due to the larger number of listeners. Boxplots and multilevel modelling allowed us to identify listeners who performed particularly badly, while thresholding was sufficient to eliminate rogue workers. We conclude that AMT is a viable platform for synthetic speech intelligibility comparisons.
dc.title: Evaluating speech synthesis intelligibility using Amazon Mechanical Turk
dc.type: Conference Paper
rps.title: Proc. 7th Speech Synthesis Workshop (SSW7)
dc.date.updated: 2011-01-19T12:27:05Z
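The thresholding step mentioned in the abstract (eliminating rogue workers whose transcription word error rates are implausibly high) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual procedure: the `wer` helper, the per-worker averaging, and the 0.5 cutoff are all assumptions.

```python
def wer(reference, hypothesis):
    """Word error rate: Levenshtein distance over word tokens / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[-1][-1] / len(ref)


def filter_rogue_workers(results, cutoff=0.5):
    """Keep workers whose mean WER across their items falls below the cutoff.

    results: {worker_id: [(reference_text, transcript), ...]}
    The 0.5 cutoff is a hypothetical value chosen for illustration.
    """
    kept = {}
    for worker, pairs in results.items():
        mean_wer = sum(wer(ref, hyp) for ref, hyp in pairs) / len(pairs)
        if mean_wer < cutoff:
            kept[worker] = pairs
    return kept
```

In practice a production pipeline would likely use an established WER implementation and, as the abstract notes, complement simple thresholding with boxplots and multilevel modelling to identify poorly performing (but genuine) listeners.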

