
dc.contributor.author   Wrigley, Stuart N
dc.contributor.author   Brown, Guy J
dc.contributor.author   Wan, Vincent
dc.contributor.author   Renals, Steve
dc.date.accessioned     2006-05-16T10:53:49Z
dc.date.available       2006-05-16T10:53:49Z
dc.date.issued          2003
dc.identifier.citation  Wrigley, Stuart N. / Brown, Guy J. / Wan, Vincent / Renals, Steve (2003): "Feature selection for the classification of crosstalk in multi-channel audio", In EUROSPEECH-2003, 469-472.
dc.identifier.uri       http://www.isca-speech.org/archive/eurospeech_2003/index.html
dc.identifier.uri       http://hdl.handle.net/1842/1099
dc.description.abstract An extension to the conventional speech / nonspeech classification framework is presented for a scenario in which a number of microphones record the activity of speakers present at a meeting (one microphone per speaker). Since each microphone can receive speech from both the participant wearing the microphone (local speech) and other participants (crosstalk), the recorded audio can be broadly classified in four ways: local speech, crosstalk plus local speech, crosstalk alone and silence. We describe a classifier in which a Gaussian mixture model (GMM) is used to model each class. A large set of potential acoustic features is considered, some of which have been employed in previous speech / nonspeech classifiers. A combination of two feature selection algorithms is used to identify the optimal feature set for each class. Results from the GMM classifier using the selected features are superior to those of a previously published approach.
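
As an illustration of the classification scheme described in the abstract (one GMM per class, decision by highest likelihood), the following is a minimal sketch using scikit-learn. The class labels, feature dimensionality, number of mixture components, and placeholder training data are assumptions for illustration only, not the configuration or selected features reported in the paper.

    # Sketch: one Gaussian mixture model per audio class, frames assigned to the
    # class whose GMM gives the highest log-likelihood. Placeholder features only.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    CLASSES = ["local speech", "crosstalk + local speech", "crosstalk", "silence"]

    def train_gmms(features_by_class, n_components=8):
        """Fit one GMM per class on its training feature vectors (frames x dims)."""
        gmms = {}
        for name, feats in features_by_class.items():
            gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
            gmm.fit(feats)
            gmms[name] = gmm
        return gmms

    def classify(gmms, frame):
        """Return the class whose GMM assigns the highest log-likelihood to the frame."""
        frame = np.atleast_2d(frame)
        scores = {name: gmm.score(frame) for name, gmm in gmms.items()}
        return max(scores, key=scores.get)

    # Usage with random 10-dimensional placeholder features (hypothetical data).
    rng = np.random.default_rng(0)
    train = {name: rng.normal(loc=i, size=(200, 10)) for i, name in enumerate(CLASSES)}
    gmms = train_gmms(train)
    print(classify(gmms, rng.normal(loc=2.0, size=10)))
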
dc.format.extent        170298 bytes
dc.format.mimetype      application/pdf
dc.language.iso         en
dc.publisher            International Speech Communication Association
dc.subject              Speech Recognition
dc.subject              crosstalk
dc.subject              Gaussian mixture model
dc.title                Feature selection for the classification of crosstalk in multi-channel audio
dc.type                 Conference Paper

