Audio-Visual Recognition of Overlapped Speech for the LRS2 Dataset
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: Automatic recognition of overlapped speech remains a highly challenging task to date. Motivated by the bimodal nature of human speech perception, this paper investigates the use of audio-visual technologies for overlapped speech recognition. Three issues associated with the construction of audio-visual speech recognition (AVSR) systems are addressed. First, the basic architecture designs, i.e., end-to-end and hybrid, of AVSR systems are investigated. Second, purposefully designed modality fusion gates are used to robustly integrate the audio and visual features. Third, in contrast to a traditional pipelined architecture containing explicit speech separation and recognition components, a streamlined and integrated AVSR system optimized consistently using the lattice-free MMI (LF-MMI) discriminative criterion is also proposed. The proposed LF-MMI time-delay neural network (TDNN) system establishes the state of the art on the LRS2 dataset. Experiments on overlapped speech simulated from the LRS2 dataset suggest that the proposed AVSR system outperforms the audio-only baseline LF-MMI DNN system by up to 29.98% absolute in word error rate (WER) reduction, and produces recognition performance comparable to that of a more complex pipelined system. Consistent improvements of 4.89% absolute in WER reduction over the baseline AVSR system using feature fusion are also obtained.
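The modality fusion gate mentioned in the abstract can be illustrated with a toy sketch: an element-wise sigmoid gate, computed from the concatenated audio and visual features, interpolates between the two modalities. This is only a minimal NumPy illustration of the general gated-fusion idea, not the authors' implementation; the weight shapes, the sigmoid form, and all variable names here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(audio_feat, visual_feat, W, b):
    """Toy gated modality fusion (illustrative only).

    An element-wise gate in (0, 1) is computed from the concatenated
    modalities, then used to interpolate between audio and visual features.
    """
    concat = np.concatenate([audio_feat, visual_feat], axis=-1)
    gate = sigmoid(concat @ W + b)  # shape: (dim,), values in (0, 1)
    return gate * audio_feat + (1.0 - gate) * visual_feat

# Toy per-frame embeddings and randomly initialized gate parameters.
rng = np.random.default_rng(0)
dim = 4
a = rng.standard_normal(dim)              # audio frame embedding (toy)
v = rng.standard_normal(dim)              # visual frame embedding (toy)
W = rng.standard_normal((2 * dim, dim)) * 0.1
b = np.zeros(dim)

fused = gated_fusion(a, v, W, b)
print(fused.shape)
```

Because the gate lies strictly between 0 and 1, each fused component is a convex combination of the corresponding audio and visual components, so a corrupted modality can be softly down-weighted rather than discarded.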
Authors: Jianwei Yu, Shixiong Zhang, Jian Wu, Shahram Ghorbani, Bo Wu, Shiyin Kang, Shansong Liu, Xunying Liu, Helen Meng, Dong Yu
Conference name: IEEE ICASSP 2020
Conference start date: 04.05.2020
Conference end date: 08.05.2020
Conference location: Barcelona
Conference country/region: Spain
Proceedings title: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Year of publication: 2020
Publisher: IEEE
Pages: 6984-6988
ISSN: 1520-6149
Language: American English

Last updated on 2021-09-05 at 00:12