Efficient Convention Emergence through Decoupled Reinforcement Social Learning with Teacher-Student Mechanism
Refereed conference paper presented and published in conference proceedings


Abstract: In this paper, we design reinforcement learning-based (RL-based) strategies to promote convention emergence in multiagent systems (MASs) with a large convention space. We apply our approaches to a language coordination problem in which agents need to coordinate on a dominant lexicon for efficient communication. By modeling each lexicon, which maps each concept to a single word, as a Markov strategy representation, the original single-state convention learning problem can be transformed into a multi-state multiagent coordination problem. The dynamics of lexicon evolution during an interaction episode can then be modeled as a Markov game, which allows agents to improve the action values of each concept separately and incrementally. Specifically, we propose two learning strategies, multiple-Q and multiple-R, and further incorporate a teacher-student mechanism on top of them to accelerate lexicon convergence. Extensive experiments verify that our approaches outperform state-of-the-art approaches in terms of convergence efficiency, convention quality, and scalability.
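
The record above is bibliographic, but the abstract's central idea, decoupling a lexicon into one value table per concept so that each concept's word choice is learned separately and incrementally, can be pictured with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual method: the class name MultipleQAgent, the epsilon-greedy policy, the match/mismatch reward, and the parameters alpha and epsilon are all hypothetical, and the teacher-student mechanism is omitted.

import random
from collections import defaultdict

class MultipleQAgent:
    """Hypothetical sketch of decoupled, per-concept value learning.

    Each concept keeps its own table of word values, so the flat
    convention-learning problem decomposes into one small learner per
    concept, updated separately and incrementally as the abstract
    describes. Names, rewards, and update rule are illustrative only.
    """

    def __init__(self, concepts, words, alpha=0.1, epsilon=0.1):
        self.concepts = list(concepts)
        self.words = list(words)
        self.alpha = alpha      # learning rate (assumed value)
        self.epsilon = epsilon  # exploration rate (assumed value)
        # One value table per concept: q[concept][word] -> estimated value.
        self.q = {c: defaultdict(float) for c in self.concepts}

    def speak(self, concept):
        # Epsilon-greedy choice over this concept's own value table.
        if random.random() < self.epsilon:
            return random.choice(self.words)
        return max(self.words, key=lambda w: self.q[concept][w])

    def update(self, concept, word, reward):
        # Incremental update touching only the acted-on concept.
        self.q[concept][word] += self.alpha * (reward - self.q[concept][word])

    def lexicon(self):
        # Current greedy lexicon: each concept mapped to its best word.
        return {c: max(self.words, key=lambda w: self.q[c][w]) for c in self.concepts}

def interact(speaker, listener, concept):
    # Illustrative pairwise episode: both sides are rewarded on a word match.
    w_s, w_l = speaker.speak(concept), listener.speak(concept)
    reward = 1.0 if w_s == w_l else 0.0
    speaker.update(concept, w_s, reward)
    listener.update(concept, w_l, reward)

In this reading, a convention has emerged once repeated pairwise interactions drive every agent's greedy lexicon to the same concept-to-word mapping; the paper's multiple-R variant and its teacher-student advising would replace the update and action-selection steps sketched here.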
All Author(s) List: Yixi Wang, Wenhuan Lu, Jianye Hao, Jianguo Wei, Ho-Fung Leung
Name of Conference: 17th International Conference on Autonomous Agents and MultiAgent Systems
Start Date of Conference: 10/07/2018
End Date of Conference: 15/07/2018
Place of Conference: Stockholm
Country/Region of Conference: Sweden
Proceedings Title: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018
Year: 2018
Volume Number: 2
Pages: 795-803
ISBN: 978-1-5108-6808-3
ISSN: 1548-8403
Languages: English (United States)
Keywords: Multiagent social learning, Convention emergence
