A comparative study of RPCL and MCE based discriminative training methods for LVCSR
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: This paper presents a comparative study of two discriminative methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), for Large Vocabulary Continuous Speech Recognition (LVCSR). MCE aims at minimizing a smoothed sentence error on the training data, while RPCL focuses on avoiding misclassification by enforcing learning of the correct class and de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the state level. The LVCSR results show that both MCE and RPCL outperform Maximum Likelihood Estimation (MLE), and that RPCL has better discriminative and generative abilities than MCE. © 2012 Springer-Verlag.
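The learn/de-learn mechanism the abstract describes can be illustrated with the classic RPCL update rule for prototype vectors: the winner (closest prototype) is pulled toward the sample, while its best rival (second closest) is pushed away with a much smaller de-learning rate. This is a minimal generic sketch, not the paper's state-level HMM implementation; the function name and rates are illustrative assumptions.

```python
import numpy as np

def rpcl_update(prototypes, x, lr=0.05, delr=0.005):
    """One RPCL step (illustrative sketch, not the paper's HMM variant).

    prototypes: (K, D) array of class prototypes, modified in place.
    x:          (D,) input sample.
    lr:         learning rate for the winner.
    delr:       much smaller de-learning rate for the best rival.
    Returns the (winner, rival) indices.
    """
    dists = np.linalg.norm(prototypes - x, axis=1)
    order = np.argsort(dists)
    winner, rival = order[0], order[1]
    # Learn: move the winner toward the sample.
    prototypes[winner] += lr * (x - prototypes[winner])
    # De-learn: push the best rival away from the sample.
    prototypes[rival] -= delr * (x - prototypes[rival])
    return winner, rival
```

Running this step repeatedly drives redundant rivals away from cluster centers, which is the penalization effect RPCL uses to sharpen class boundaries.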
Authors: Pang Z., Wu X., Xu L.
Conference name: 2nd Sino-Foreign-Interchange Workshop on Intelligent Science and Intelligent Data Engineering, IScIDE 2011
Conference start date: 23.10.2011
Conference end date: 25.10.2011
Conference venue: Xi'an
Conference country/region: China
Detailed description: ed. by Yanning Zhang, Zhi-Hua Zhou, Changshui Zhang, Ying Li.
Year of publication: 2012
Month: 9
Day: 11
Volume: 7202 LNCS
Publisher: Springer Verlag
Place of publication: Germany
Pages: 27 - 34
ISBN: 9783642319181
ISSN: 0302-9743
Language: British English
Keywords: discriminative training, large vocabulary continuous speech recognition, minimum classification error, Rival penalized competitive learning

Last updated: 2021-09-18 at 23:41