SAC-Net: Spatial Attenuation Context for Salient Object Detection
Publication in refereed journal

Abstract: This paper presents a new deep neural network design for salient object detection by maximizing the integration of local and global image context within, around, and beyond the salient objects. Our key idea is to adaptively propagate and aggregate the image context features with variable attenuation over the entire feature maps. To achieve this, we design the spatial attenuation context (SAC) module to recurrently translate and aggregate the context features independently with different attenuation factors, and then to attentively learn the weights to adaptively integrate the aggregated context features. By further embedding the module to process individual layers in a deep network, namely SAC-Net, we can train the network end-to-end and optimize the context features for detecting salient objects. Compared with 29 state-of-the-art methods, experimental results show that our method performs favorably against all the others on six common benchmark datasets, both quantitatively and visually.
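The abstract describes two steps: recurrently propagating context features under several attenuation factors, then combining the resulting context maps with learned attention weights. The sketch below illustrates that idea in NumPy under stated assumptions; it is not the paper's implementation. The propagation recurrence `h[i] = f[i] + alpha * h[i-1]` (one direction only; the full module would cover multiple directions), the factor set `alphas`, and the uniform stand-in for the learned attention logits are all illustrative choices, not details from the source.

```python
import numpy as np

def directional_propagate(feat, alpha):
    """Recurrently propagate a 2-D feature map left-to-right with
    attenuation factor alpha: h[:, i] = feat[:, i] + alpha * h[:, i-1].
    Only one direction is shown; a full spatial-context module would
    repeat this for the other directions as well."""
    out = feat.astype(float).copy()
    for i in range(1, out.shape[1]):
        out[:, i] += alpha * out[:, i - 1]
    return out

def sac_sketch(feat, alphas=(0.1, 0.5, 0.9)):
    """Aggregate context maps computed with different attenuation
    factors. The attention logits here are zeros (uniform softmax
    weights) purely as a stand-in; in the paper the weights are
    learned end-to-end."""
    contexts = np.stack([directional_propagate(feat, a) for a in alphas])
    logits = np.zeros(len(alphas))              # stand-in for learned attention
    weights = np.exp(logits) / np.exp(logits).sum()
    # Weighted sum over the attenuation-factor axis -> one fused map.
    return np.tensordot(weights, contexts, axes=1)
```

With `alpha = 0` the propagation is the identity, and larger factors let each position accumulate progressively more left-hand context, which is the "variable attenuation" behavior the abstract refers to.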
Acceptance Date: 11/05/2020
All Author(s) List: Xiaowei Hu, Chi-Wing Fu, Lei Zhu, Tianyu Wang, Pheng-Ann Heng
Journal Name: IEEE Transactions on Circuits and Systems for Video Technology
Volume Number: 31
Issue Number: 3
Pages: 1079-1090
Languages: English (United States)
