Salience Estimation via Variational Auto-Encoders for Multi-Document Summarization
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: We propose a new unsupervised sentence salience framework for Multi-Document Summarization (MDS), which can be divided into two components: latent semantic modeling and salience estimation. For latent semantic modeling, a neural generative model called Variational Auto-Encoders (VAEs) is employed to describe the observed sentences and the corresponding latent semantic representations. Neural variational inference is used for the posterior inference of the latent variables.
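
For illustration, below is a minimal sketch of the latent semantic modeling component, assuming sentences are represented as binary bag-of-words term vectors and using a simple one-hidden-layer encoder and decoder in PyTorch. The layer sizes, names, and framework choice are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal VAE sketch for sentence latent semantic modeling (PyTorch assumed).
# Each sentence is a binary bag-of-words term vector of size vocab_size;
# layer sizes and names are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceVAE(nn.Module):
    def __init__(self, vocab_size, hidden_dim=500, latent_dim=100):
        super().__init__()
        self.enc = nn.Linear(vocab_size, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.dec = nn.Linear(latent_dim, vocab_size)

    def encode(self, x):
        h = torch.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Neural variational inference via the reparameterization trick:
        # sample z = mu + sigma * eps with eps ~ N(0, I)
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        x_hat = self.dec(z)  # reconstructed term vector (logits)
        return x_hat, z, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```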

For salience estimation, we propose an unsupervised data reconstruction framework that jointly considers reconstruction in the latent semantic space and in the observed term vector space. This allows sentence salience to be captured from these two different and complementary vector spaces.
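
One way such reconstruction-based salience estimation could look is sketched below, assuming salience scores are learned so that a weighted combination of sentences reconstructs a document centroid in both the observed term-vector space and the VAE latent space. The softmax parameterization, centroid targets, and trade-off weight alpha are assumptions for illustration, not the paper's exact objective.

```python
# Illustrative salience-by-reconstruction sketch (not the paper's exact objective).
# X: sentence term vectors (n x V); Z: sentence latent codes from the VAE (n x d).
# Learn normalized salience scores s so that the salience-weighted sentence
# combination reconstructs the document in both spaces.
import torch

def estimate_salience(X, Z, steps=500, lr=0.05, alpha=0.5):
    n = X.size(0)
    doc_x = X.mean(dim=0)            # document centroid in term-vector space
    doc_z = Z.mean(dim=0)            # document centroid in latent semantic space
    w = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        s = torch.softmax(w, dim=0)  # nonnegative, normalized salience weights
        loss_x = torch.norm(s @ X - doc_x) ** 2   # reconstruction in term space
        loss_z = torch.norm(s @ Z - doc_z) ** 2   # reconstruction in latent space
        loss = alpha * loss_x + (1 - alpha) * loss_z
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(w, dim=0).detach()       # per-sentence salience scores
```

Sentences with the largest scores would then be selected for the summary, subject to the length budget.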

The VAE-based latent semantic model is then integrated into the sentence salience estimation component in a unified fashion, and the whole framework can be trained jointly by back-propagation via multi-task learning.
Experimental results on the benchmark DUC and TAC datasets show that our framework achieves better performance than state-of-the-art models.
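
As a final illustration, the joint training step could combine the VAE objective and the two reconstruction objectives into a single multi-task loss optimized end-to-end by back-propagation; the trade-off weights lam_z and lam_x below are hypothetical hyper-parameters, not values from the paper.

```python
# Hedged multi-task training sketch: sum the VAE loss and the two reconstruction
# losses with hypothetical trade-off weights, then back-propagate through the
# whole framework so latent modeling and salience estimation are trained jointly.
import torch

def joint_loss(vae_term: torch.Tensor,
               latent_recon: torch.Tensor,
               observed_recon: torch.Tensor,
               lam_z: float = 1.0,
               lam_x: float = 1.0) -> torch.Tensor:
    # vae_term: ELBO-style VAE loss; latent_recon / observed_recon: the two
    # reconstruction errors from the salience estimation component.
    return vae_term + lam_z * latent_recon + lam_x * observed_recon
```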
All Author(s) List: Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, Lidong Bing
Name of Conference: The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Start Date of Conference: 04/02/2017
End Date of Conference: 09/02/2017
Place of Conference: San Francisco, California
Country/Region of Conference: United States of America
Proceedings Title: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
Year: 2017
Pages: 3497-3503
Languages: English (United States)
