Characterizing the Adversarial Vulnerability of Speech Self-Supervised Learning
Refereed conference paper presented and published in conference proceedings

Other information
Abstract: A leaderboard named Speech processing Universal PERformance Benchmark (SUPERB), which aims at benchmarking the performance of a shared self-supervised learning (SSL) speech model across various downstream speech tasks with minimal modification of architectures and a small amount of data, has fueled research on speech representation learning. SUPERB demonstrates that speech SSL upstream models improve the performance of various downstream tasks with only minimal adaptation. As the paradigm of a self-supervised upstream model followed by downstream tasks attracts more attention in the speech community, characterizing the adversarial robustness of this paradigm is of high priority. In this paper, we make the first attempt to investigate the adversarial vulnerability of this paradigm under attacks from both zero-knowledge adversaries and limited-knowledge adversaries. The experimental results illustrate that the paradigm proposed by SUPERB is seriously vulnerable to limited-knowledge adversaries, and that the attacks generated by zero-knowledge adversaries are transferable. An XAB test verifies the imperceptibility of the crafted adversarial attacks.
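
The paper studies attacks on the SSL-upstream-plus-downstream paradigm under zero-knowledge and limited-knowledge threat models. The sketch below only illustrates the general idea of crafting a small, bounded audio perturbation against a frozen SSL upstream; it assumes torchaudio's WAV2VEC2_BASE bundle as a stand-in upstream and a generic PGD-style feature-distance objective, not the authors' exact attack, models, or hyperparameters.

```python
# Minimal PGD-style sketch against a frozen speech SSL upstream.
# Assumptions: torchaudio's WAV2VEC2_BASE as the upstream model;
# epsilon/alpha/steps are illustrative values, not the paper's settings.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
upstream = bundle.get_model().eval()       # frozen SSL upstream
for p in upstream.parameters():
    p.requires_grad_(False)                # attack only the input, not the model

def pgd_attack(waveform, epsilon=1e-3, alpha=2e-4, steps=10):
    """Craft an additive perturbation (L-inf bounded by epsilon) that pushes
    the upstream's representations away from those of the clean utterance."""
    with torch.no_grad():
        clean_feats, _ = upstream.extract_features(waveform)
        clean_last = clean_feats[-1]        # last-layer features of the clean input

    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(steps):
        adv_feats, _ = upstream.extract_features(waveform + delta)
        # Maximize the feature-space distance to the clean representation.
        loss = torch.nn.functional.mse_loss(adv_feats[-1], clean_last)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-epsilon, epsilon)      # keep the perturbation small
        delta.grad.zero_()
    return (waveform + delta).detach()

# Example usage on one second of random audio at the bundle's sample rate.
adv = pgd_attack(torch.randn(1, bundle.sample_rate))
```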
Authors: Wu H., Zheng B., Li X., Wu X., Lee H.Y., Meng H.
Conference name: IEEE International Conference on Acoustics, Speech and Signal Processing
Conference start date: 07.05.2022
Conference end date: 13.05.2022
Conference venue: Singapore
Conference country/region: Singapore
Proceedings title: IEEE International Conference on Acoustics, Speech and Signal Processing
Year of publication: 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3164-3168
ISBN: 9781665405409
Language: English (United Kingdom)
Keywords: Adversarial attack, self-supervised learning

Last updated on 2024-08-21 at 00:46