Deep binocular tone mapping
Publication in refereed journal
Officially Accepted for Publication



Other information
Abstract: Binocular tone mapping has been studied in previous works to generate a fusible pair of LDR images that conveys more visual content than a single LDR image. However, existing methods are all built on monocular tone mapping operators, which greatly restricts the preservation of local details and global contrast in a binocular LDR pair. In this paper, we propose the first binocular tone mapping operator that distributes visual content to an LDR pair more effectively, leveraging the representability and interpretability of deep convolutional neural networks. Based on existing binocular perception models, novel loss functions are also proposed to optimize the output pairs in terms of local details, global contrast, content distribution, and binocular fusibility. Our method is validated with a qualitative and quantitative evaluation, as well as a user study. Statistics show that our method outperforms the state-of-the-art binocular tone mapping frameworks in terms of both visual quality and time performance.
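The abstract describes a CNN-based operator trained with loss terms for local details, global contrast, content distribution, and binocular fusibility. The PyTorch sketch below is only a rough illustration of how such a combined objective might be assembled; the term definitions, weights, and threshold value are placeholders of our own and do not reproduce the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def binocular_tone_mapping_loss(left_ldr, right_ldr, hdr_log,
                                weights=(1.0, 1.0, 1.0, 1.0)):
    """Illustrative combined loss for a binocular LDR pair predicted from a
    log-luminance HDR image. All tensors are assumed to share the same shape.
    The individual terms are placeholder proxies, not the paper's losses."""
    w_detail, w_contrast, w_dist, w_fuse = weights

    # Finite-difference image gradients (proxy for local high-frequency detail).
    def grads(x):
        gx = x[..., :, 1:] - x[..., :, :-1]
        gy = x[..., 1:, :] - x[..., :-1, :]
        return gx, gy

    hx, hy = grads(hdr_log)
    lx, ly = grads(left_ldr)
    rx, ry = grads(right_ldr)

    # Local-detail term: both views should preserve the HDR gradient structure.
    detail = 0.25 * (F.l1_loss(lx, hx) + F.l1_loss(ly, hy)
                     + F.l1_loss(rx, hx) + F.l1_loss(ry, hy))

    # Global-contrast term: reward a wide luminance spread in each view
    # (negative standard deviation as a simple proxy).
    contrast = -(left_ldr.std() + right_ldr.std())

    # Content-distribution term: the two views should differ, so the pair
    # jointly conveys more content than either view alone.
    distribution = -F.l1_loss(left_ldr, right_ldr)

    # Fusibility term: penalize per-pixel differences beyond a perceptual
    # threshold so the pair remains binocularly fusible (threshold is a
    # placeholder value).
    threshold = 0.1
    fusibility = F.relu((left_ldr - right_ldr).abs() - threshold).mean()

    return (w_detail * detail + w_contrast * contrast
            + w_dist * distribution + w_fuse * fusibility)
```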
Acceptance Date: 14/05/2019
All Author(s) List: Zhuming Zhang, Chu Han, Shengfeng He, Xueting Liu, Haichao Zhu, Xinghong Hu, Tien-Tsin Wong
Journal name: Visual Computer
Year: 2019
Publisher: Springer
ISSN: 0178-2789
eISSN: 1432-2315
Languages: English-United States
Keywords: Deep learning, binocular tone mapping, computational perception, computer graphics
