Deep Flow-Guided Video Inpainting
Refereed conference paper presented and published in conference proceedings


Other information
Abstract: Video inpainting, which aims at filling in missing regions of a video, remains challenging due to the difficulty of preserving the precise spatial and temporal coherence of video contents. In this work we propose a novel flow-guided video inpainting approach. Rather than filling in the RGB pixels of each frame directly, we consider video inpainting as a pixel propagation problem. We first synthesize a spatially and temporally coherent optical flow field across video frames using a newly designed Deep Flow Completion network. The synthesized flow field is then used to guide the propagation of pixels to fill the missing regions in the video. Specifically, the Deep Flow Completion network follows a coarse-to-fine refinement to complete the flow fields, whose quality is further improved by hard flow example mining. Guided by the completed flow, the missing video regions can be filled in precisely. Our method is evaluated on the DAVIS and YouTube-VOS datasets both qualitatively and quantitatively, achieving state-of-the-art performance in terms of inpainting quality and speed. Code and models are available at https://github.com/nbei/Deep-Flow-Guided-Video-Inpainting
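The released repository implements the full pipeline; purely as an illustration of the flow-guided pixel propagation step described in the abstract, the following is a minimal NumPy sketch. The function names (propagate_pixels, _pull), the nearest-neighbour warp, and the fixed two-pass schedule are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def propagate_pixels(frames, masks, fwd_flows, bwd_flows, num_passes=2):
    """Fill masked pixels by propagating known pixels along completed flow.

    frames:    list of HxWx3 float arrays
    masks:     list of HxW bool arrays (True = missing)
    fwd_flows: fwd_flows[t] maps frame t -> t+1 (HxWx2, dx/dy per pixel)
    bwd_flows: bwd_flows[t] maps frame t -> t-1
    """
    frames = [f.copy() for f in frames]
    masks = [m.copy() for m in masks]
    h, w = masks[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(num_passes):
        # Forward pass: pull known pixels from the previous frame
        # via the backward flow of the current frame.
        for t in range(1, len(frames)):
            _pull(frames, masks, t, t - 1, bwd_flows[t], ys, xs)
        # Backward pass: pull known pixels from the next frame
        # via the forward flow of the current frame.
        for t in range(len(frames) - 2, -1, -1):
            _pull(frames, masks, t, t + 1, fwd_flows[t], ys, xs)
    return frames, masks

def _pull(frames, masks, t, src, flow, ys, xs):
    # Nearest-neighbour warp (illustrative simplification): where frame t
    # is missing, fetch the pixel the flow points to in the source frame,
    # provided that source pixel is known.
    sx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, xs.shape[1] - 1)
    sy = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, ys.shape[0] - 1)
    fillable = masks[t] & ~masks[src][sy, sx]
    frames[t][fillable] = frames[src][sy[fillable], sx[fillable]]
    masks[t][fillable] = False
```

In the full method, pixels that remain unreachable along any flow trajectory are handled separately with a conventional image-inpainting network; the sketch above simply leaves such pixels masked.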
Acceptance Date: 24/02/2019
All Author(s) List: Rui Xu, Xiaoxiao Li, Bolei Zhou, Chen Change Loy
Name of Conference: Computer Vision and Pattern Recognition (CVPR)
Start Date of Conference: 16/06/2019
End Date of Conference: 20/06/2019
Place of Conference: Long Beach
Country/Region of Conference: United States of America
Proceedings Title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Year: 2019
Languages: English (United States)