Edinburgh Research Archive

Enhancing implicit discourse relation recognition by exploiting label inter-relations

Authors

Long, Wanqiu

Abstract

Implicit Discourse Relation Recognition (IDRR) is a fundamental yet challenging task in discourse parsing: it requires identifying the rhetorical, semantic and/or pragmatic relationships between text spans in the absence of explicit connectives such as “because” or “however”. While recent advances leveraging pre-trained language models and prompt-based learning have improved performance, most existing approaches treat discourse sense labels as flat, independent categories. This neglects the rich structural information embedded in annotation frameworks such as the Penn Discourse Treebank (PDTB), where discourse relations are organized hierarchically and may co-occur. The central claim of this thesis is that structured groupings of discourse senses, as encoded in the sense hierarchy, can serve as an effective structural prior to guide model training, particularly by shaping how distances between labels are represented and learned. This thesis proposes methods that enhance IDRR by exploiting two kinds of label inter-relations: hierarchical relations and co-occurrence-based relations. First, we introduce a contrastive learning framework that uses the PDTB sense hierarchy to guide the selection of semantically meaningful negative examples during training, encouraging the model to learn finer-grained distinctions between closely related senses. Second, we integrate hierarchical information into a prompt-based learning paradigm through a prototype-based verbalizer, which aligns label representations with the sense hierarchy. This approach is further extended to zero-shot cross-lingual IDRR, demonstrating effectiveness in both monolingual and cross-lingual settings. Third, we explore multi-label classification frameworks to handle cases where multiple discourse relations hold simultaneously between a single pair of text spans, an under-addressed yet prevalent phenomenon in real-world discourse.
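As a rough illustration of the first contribution, hierarchy-guided selection of contrastive negatives might look like the following minimal sketch. The sense labels and the `hard_negatives` helper are hypothetical simplifications of the PDTB hierarchy, not the thesis's actual implementation:

```python
# Hypothetical, simplified PDTB-style mapping from second-level senses
# to their top-level classes (labels illustrative only).
SENSE_HIERARCHY = {
    "Contingency.Cause": "Contingency",
    "Contingency.Condition": "Contingency",
    "Comparison.Contrast": "Comparison",
    "Comparison.Concession": "Comparison",
    "Expansion.Conjunction": "Expansion",
    "Expansion.Instantiation": "Expansion",
}

def hard_negatives(anchor_sense, candidate_senses):
    """Return candidates sharing the anchor's top-level class but
    differing at the second level -- the 'closely related' senses a
    contrastive objective should learn to pull apart."""
    top = SENSE_HIERARCHY[anchor_sense]
    return [s for s in candidate_senses
            if s != anchor_sense and SENSE_HIERARCHY[s] == top]

print(hard_negatives("Contingency.Cause", list(SENSE_HIERARCHY)))
# -> ['Contingency.Condition']
```

The idea is that negatives drawn from sibling senses under the same top-level class are harder, and therefore more informative, than negatives from a distant class.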
Incorporating hierarchical sense information also improves the accuracy of multi-label predictions. Extensive experiments demonstrate the effectiveness of our approaches, which exploit these label inter-relations: explicitly modeling label hierarchies improves performance in both single-label and multi-label classification. This work advances our understanding of how structural relationships between discourse relations can be utilized in computational models, while also highlighting the importance of handling multi-label cases in discourse relation recognition. Finally, this thesis outlines several promising directions for future work. One avenue is to extend the proposed approaches to broader datasets, including discourse annotations from alternative frameworks such as RST and eRST, as well as texts from diverse domains and languages, to better assess the generalizability of the methods. Another is to integrate argument span detection with discourse relation recognition in a unified framework, advancing toward more realistic, end-to-end discourse parsing systems. Additionally, future work may explore guiding Large Language Models (LLMs) to better represent discourse relations and to understand the relationships between senses by leveraging the hierarchical organization of discourse senses. Together, these directions point toward the broader goal of capturing the complexity of coherence in natural language more effectively.
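One simple way hierarchical information can constrain multi-label output, sketched here with hypothetical names and thresholds (not the thesis's exact method), is to threshold per-sense scores and then add the top-level parent of every predicted second-level sense, so that predictions always respect the hierarchy:

```python
# Hypothetical, simplified mapping from second-level senses to their
# top-level classes (labels illustrative only).
SENSE_PARENT = {
    "Contingency.Cause": "Contingency",
    "Comparison.Contrast": "Comparison",
    "Expansion.Instantiation": "Expansion",
}

def consistent_multilabel(scores, parent, threshold=0.5):
    """Threshold per-sense scores, then add each predicted sense's
    top-level parent so the label set is hierarchy-consistent."""
    preds = {s for s, p in scores.items() if p >= threshold}
    preds |= {parent[s] for s in preds if s in parent}
    return sorted(preds)

scores = {"Contingency.Cause": 0.81, "Comparison.Contrast": 0.34,
          "Expansion.Instantiation": 0.57}
print(consistent_multilabel(scores, SENSE_PARENT))
# -> ['Contingency', 'Contingency.Cause', 'Expansion', 'Expansion.Instantiation']
```

Enforcing parent consistency in this way guarantees that a fine-grained prediction is never emitted without its coarse-grained class, which is one concrete sense in which the hierarchy can improve multi-label accuracy.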
