Show simple item record

IEEE Transactions on Audio, Speech and Language Processing

dc.contributor.author: Huang, Songfang
dc.contributor.author: Renals, Steve
dc.date.accessioned: 2010-12-15T13:57:33Z
dc.date.available: 2010-12-15T13:57:33Z
dc.date.issued: 2010
dc.identifier.uri: http://dx.doi.org/10.1109/TASL.2010.2040782
dc.identifier.uri: http://hdl.handle.net/1842/4528
dc.description.abstract: Traditional n-gram language models are widely used in state-of-the-art large vocabulary speech recognition systems. This simple model suffers from some limitations, such as overfitting of maximum-likelihood estimation and the lack of rich contextual knowledge sources. In this paper, we exploit a hierarchical Bayesian interpretation for language modeling, based on a nonparametric prior called the Pitman–Yor process. This offers a principled approach to language model smoothing, embedding the power-law distribution for natural language. Experiments on the recognition of conversational speech in multiparty meetings demonstrate that by using hierarchical Bayesian language models, we are able to achieve significant reductions in perplexity and word error rate.
dc.publisher: IEEE
dc.title: Hierarchical Bayesian Language Models for Conversational Speech Recognition
dc.type: Article
dc.identifier.doi: 10.1109/TASL.2010.2040782
rps.issue: 8
rps.volume: 18
rps.title: IEEE Transactions on Audio, Speech and Language Processing
dc.extent.pageNumbers: 1941–1954
dc.date.updated: 2010-12-15T13:57:33Z
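The abstract's central claim — that the Pitman–Yor prior embeds the power-law statistics of natural language — can be illustrated with a small simulation of the Pitman–Yor Chinese restaurant process. This is an illustrative sketch only, not the paper's hierarchical language models; the parameter values are arbitrary choices for demonstration:

```python
import random

def pitman_yor_crp(n_customers, discount, concentration, seed=0):
    """Sample table occupancies from a Pitman-Yor Chinese restaurant process.

    Customer n+1 joins existing table k with probability proportional to
    (c_k - discount), and opens a new table with probability proportional
    to (concentration + discount * num_tables); the total mass is
    n + concentration.
    """
    rng = random.Random(seed)
    tables = []  # customer count per table
    for n in range(n_customers):
        r = rng.uniform(0.0, n + concentration)
        acc = 0.0
        for k, c in enumerate(tables):
            acc += c - discount
            if r < acc:
                tables[k] += 1  # seated at an existing table
                break
        else:
            tables.append(1)  # remaining mass: open a new table
    return tables

# A positive discount makes the number of tables (word types) grow
# roughly as a power of n, mimicking natural-language vocabularies;
# discount = 0 recovers the Dirichlet process, whose type count grows
# only logarithmically.
py = pitman_yor_crp(5000, discount=0.8, concentration=1.0)
dp = pitman_yor_crp(5000, discount=0.0, concentration=1.0)
print(len(py), len(dp))
```

Running the two configurations side by side shows the Pitman–Yor variant producing far more distinct tables than the Dirichlet-process baseline on the same number of draws, which is the heavy-tailed behaviour the abstract refers to.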


