Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards
Refereed conference paper presented and published in conference proceedings


Abstract: Generating keyphrases that summarize the main points of a document is a fundamental task in natural language processing. Although existing generative models are capable of predicting multiple keyphrases for an input document as well as determining the number of keyphrases to generate, they still suffer from the problem of generating too few keyphrases. To address this problem, we propose a reinforcement learning (RL) approach for keyphrase generation, with an adaptive reward function that encourages a model to generate both sufficient and accurate keyphrases. Furthermore, we introduce a new evaluation method that incorporates name variations of the ground-truth keyphrases using the Wikipedia knowledge base. Thus, our evaluation method can more robustly evaluate the quality of predicted keyphrases. Extensive experiments on five real-world datasets of different scales demonstrate that our RL approach consistently and significantly improves the performance of the state-of-the-art generative models with both conventional and new evaluation methods.
Authors: Hou Pong Chan, Wang Chen, Lu Wang, Irwin King
Conference name: 57th Annual Meeting of the Association for Computational Linguistics (ACL)
Conference start date: 28.07.2019
Conference end date: 02.08.2019
Conference location: Florence
Conference country/region: Italy
Proceedings title: 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
Book title: Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers
Year of publication: 2019
Pages: 2163-2174
ISBN: 978-1-950737-48-2
Language: American English

Last updated on 2021-07-27 at 01:00