Video Co-saliency Guided Co-segmentation
Publication in refereed journal


Other information
Abstract: We introduce the term video co-saliency to denote the task of extracting the common noticeable, or salient, regions from multiple relevant videos. The proposed video co-saliency approach accounts for both inter-video foreground correspondences and intra-video saliency stimuli to emphasize the salient foreground regions of video frames and, at the same time, disregard irrelevant visual information of the background. Compared to image co-saliency, it is more reliable owing to the utilization of the temporal information of video sequences. Benefiting from the discriminability of video co-saliency, we present a unified framework for segmenting out the common salient regions of relevant videos, guided by the video co-saliency prior. Unlike naive video co-segmentation approaches that employ simple color differences and local motion features, the presented video co-saliency provides a more powerful indicator of the common salient regions, thus enabling efficient video co-segmentation. Extensive experiments show that the proposed method successfully infers video co-saliency and extracts the common salient regions, outperforming state-of-the-art methods.
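
The sketch below is only an illustration of the general idea described in the abstract (fusing an intra-video saliency cue with an inter-video foreground-correspondence cue into a co-saliency map that then guides segmentation); it is not the authors' implementation, and all arrays, the multiplicative fusion rule, and the thresholding step are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method): combine intra-video saliency
# with inter-video correspondence into a co-saliency map, then binarize it as a
# toy stand-in for co-saliency-guided co-segmentation.
import numpy as np

def co_saliency(intra_saliency: np.ndarray, inter_correspondence: np.ndarray) -> np.ndarray:
    """Fuse per-pixel intra-video saliency with an inter-video correspondence
    score; both inputs are assumed to lie in [0, 1]."""
    fused = intra_saliency * inter_correspondence      # keep regions supported by both cues
    return fused / (fused.max() + 1e-8)                # renormalize to [0, 1]

def co_segment(co_sal: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold the co-saliency map into a rough foreground mask."""
    return (co_sal >= threshold).astype(np.uint8)

if __name__ == "__main__":
    h, w = 120, 160
    rng = np.random.default_rng(0)
    intra = rng.random((h, w))   # placeholder intra-video saliency map
    inter = rng.random((h, w))   # placeholder inter-video correspondence map
    mask = co_segment(co_saliency(intra, inter))
    print("foreground pixels:", int(mask.sum()))
```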
Authors: Wenguan Wang, Jianbing Shen, Hanqiu Sun, Ling Shao
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Year of publication: 2017
Month: 5
Volume: PP
Issue: 99
Publisher: IEEE
ISSN: 1051-8215
eISSN: 1558-2205
Language: American English
Keywords: Video co-saliency, video co-segmentation

Last updated on 2020-17-10 at 03:05