Combining Multiple Knowledge Sources for Dialogue Segmentation in Multimedia Archives
dc.contributor.author: Hsueh, Pei-Yun
dc.contributor.author: Moore, Johanna D.
dc.date.accessioned: 2010-11-03T11:24:53Z
dc.date.available: 2010-11-03T11:24:53Z
dc.date.closingDate: 2007-06-27
dc.date.issued: 2010-11-03T11:23:34Z
dc.date.openingDate: 2007-06-25
dc.date.updated: 2010-11-03T11:24:53Z
dc.description.abstract: Automatic segmentation is important for making multimedia archives comprehensible, and for developing downstream information retrieval and extraction modules. In this study, we explore approaches that can segment multiparty conversational speech by integrating various knowledge sources (e.g., words, audio and video recordings, speaker intention and context). In particular, we evaluate the performance of a Maximum Entropy approach, and examine the effectiveness of multimodal features on the task of dialogue segmentation. We also provide a quantitative account of the effect of using ASR transcripts as opposed to human transcripts.
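To make the abstract concrete, the following is a minimal illustrative sketch of the general technique it names: casting dialogue segmentation as binary classification over candidate utterance boundaries with a Maximum Entropy (log-linear) model. The feature names and synthetic data are hypothetical stand-ins for exposition, not the paper's actual feature set, corpus, or implementation.

    # A minimal sketch, assuming scikit-learn is available; LogisticRegression
    # trained on a log-linear objective is equivalent to a Maximum Entropy
    # classifier over these features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200  # candidate boundaries between consecutive utterances

    # Hypothetical multimodal features per candidate boundary (illustrative,
    # not the paper's feature set):
    #   column 0: lexical cohesion across the boundary
    #   column 1: pause duration in seconds
    #   column 2: 1.0 if the speaker changes at the boundary, else 0.0
    X = np.column_stack([
        rng.random(n),
        rng.exponential(0.5, n),
        rng.integers(0, 2, n).astype(float),
    ])

    # Synthetic labels: topic boundaries tend to co-occur with low lexical
    # cohesion and long pauses.
    y = ((X[:, 0] < 0.4) | (X[:, 1] > 1.2)).astype(int)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba(X[:5])[:, 1])  # P(boundary) for 5 candidates

In practice a segmenter of this kind would threshold the predicted boundary probabilities; the paper evaluates such multimodal features against lexical-only baselines on both human and ASR transcripts.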
dc.extent.noOfPages: 8
dc.identifier.uri: http://www.aclweb.org/anthology-new/P/P07/P07-1128.pdf
dc.identifier.uri: http://hdl.handle.net/1842/4169
dc.language.iso: en
dc.title: Combining Multiple Knowledge Sources for Dialogue Segmentation in Multimedia Archives
dc.type: Conference Paper
rps.title: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics
Files (Original bundle)
- Name: MooreJ_Combining Multiple Knowledge Sources.pdf
- Size: 86.44 KB
- Format: Adobe Portable Document Format