Opinion summarization of multiple reviews: data synthesis and modeling
Amplayo, Reinald Kim
The proliferation of online reviews has accelerated research on opinion mining, whose ultimate goal is to glean information from reviews that helps users make decisions more efficiently. While opinion mining has assumed several facets in the literature (e.g., sentiment analysis, aspect extraction), opinion summarization, the task of automatically creating a textual summary of the opinions found in multiple reviews, aims to help users access review content and improve their decision making. This thesis focuses on methods for generating opinion summaries given multiple reviews about a target entity (e.g., a product or service). The task is challenging due to the absence of large-scale datasets for supervised training, which has been paramount to the recent success of neural systems. In this thesis, we propose several methods to synthesize such datasets, thereby making supervised training for opinion summarization feasible.

First, we introduce a two-step process that creates synthetic datasets for opinion summarization. Given a corpus of reviews, we sample a review and treat it as a (pseudo-)summary. We then procure a list of reviews to pair with this summary by generating noisy versions of it. Motivated by how humans write opinion summaries, removing divergent opinions from the reviews, we propose a summarization model that learns to denoise the input reviews and generate the summary. Extensive evaluation shows that our model brings substantial improvements over unsupervised abstractive and extractive baselines.

To further reflect the diversity of opinions in naturally occurring reviews, we incorporate content planning during synthetic dataset creation. For each pseudo-summary sampled from the corpus, we automatically induce its content plan in the form of aspect and sentiment distributions.
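The content plan induction step can be sketched as follows. This is a minimal illustration, not the thesis's exact method: the label sets and the `aspect_of`/`sentiment_of` classifiers are hypothetical placeholders for whatever aspect extractor and sentiment classifier one has available.

```python
from collections import Counter

# Hypothetical label sets; a real system would induce these from the corpus.
ASPECTS = ["food", "service", "price", "ambience"]
SENTIMENTS = ["positive", "neutral", "negative"]

def induce_content_plan(summary_sentences, aspect_of, sentiment_of):
    """Induce a content plan for a (pseudo-)summary: aspect and
    sentiment distributions over its sentences."""
    aspect_counts = Counter(aspect_of(s) for s in summary_sentences)
    sentiment_counts = Counter(sentiment_of(s) for s in summary_sentences)
    n = len(summary_sentences)
    aspect_dist = {a: aspect_counts[a] / n for a in ASPECTS}
    sentiment_dist = {s: sentiment_counts[s] / n for s in SENTIMENTS}
    return aspect_dist, sentiment_dist
```

With toy classifiers, a three-sentence pseudo-summary with two food sentences and one service sentence yields an aspect distribution of roughly (2/3 food, 1/3 service), which then parameterizes the sampling step described next.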
We then sample reviews from the corpus using Dirichlet distributions parameterized by the content plan, controlling the variance of the samples accordingly. Experimental results show that our approach outperforms competitive models in generating opinion summaries that capture opinion consensus.

In opinion summarization, the notion of salience largely depends on user interest; a generic summary may therefore not satisfy the needs of all users, limiting its usefulness for decision making. We thus extend opinion summarization to the generation of aspect-controllable summaries. Using a synthetic training dataset enriched with aspect controllers of different granularity, we fine-tune a pretrained language model that enables the creation of both generic and aspect-specific summaries by modifying the aspect controllers at inference time. Experiments show that our model achieves state-of-the-art performance and can generate personalized summaries.
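The Dirichlet-based review sampling described above can be sketched as follows. All names and the concentration parameterization are illustrative assumptions, not the thesis's exact formulation: a Dirichlet draw centered on the plan's aspect distribution (with variance set by a concentration parameter) decides the aspect mix, and reviews matching the drawn aspects are selected.

```python
import random

def sample_reviews(corpus, plan, variance, k=8):
    """Sample k reviews whose aspect mix follows a Dirichlet draw
    parameterized by the content plan.

    corpus:   list of (review_text, aspect) pairs
    plan:     dict mapping aspect -> target probability
    variance: smaller concentration -> higher variance around the plan
    """
    concentration = 1.0 / variance
    aspects = list(plan)
    # Dirichlet(alpha) via normalized Gamma draws, alpha_i = plan_i * concentration.
    gammas = [random.gammavariate(max(plan[a] * concentration, 1e-6), 1.0)
              for a in aspects]
    total = sum(gammas)
    weights = {a: g / total for a, g in zip(aspects, gammas)}
    # Index reviews by aspect, then draw each review's aspect from the
    # Dirichlet sample and pick a matching review from the corpus.
    by_aspect = {a: [r for r, asp in corpus if asp == a] for a in aspects}
    sampled = []
    for _ in range(k):
        a = random.choices(aspects, weights=[weights[x] for x in aspects])[0]
        if by_aspect[a]:
            sampled.append(random.choice(by_aspect[a]))
    return sampled
```

A small `variance` makes the sampled review set hew closely to the pseudo-summary's aspect distribution; a larger one admits more divergent reviews, mimicking the opinion diversity of naturally occurring input reviews.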