Learning to Predict Layout-to-image Conditional Convolutions for Semantic Image Synthesis
Refereed conference paper presented and published in conference proceedings


Abstract: Semantic image synthesis aims at generating photorealistic images from semantic layouts. Previous approaches based on conditional generative adversarial networks (GANs) achieve state-of-the-art performance on this task by either feeding the semantic label maps as inputs to the generator or using them to modulate the activations in normalization layers via affine transformations. We argue that the convolutional kernels in the generator should be aware of the distinct semantic labels at different locations when generating images. To better exploit the semantic layout for the image generator, we propose to predict convolutional kernels conditioned on the semantic label map, and to use them to generate the intermediate feature maps from the noise maps and eventually the output images. Moreover, we propose a feature-pyramid semantics-embedding discriminator, which is more effective than previous multi-scale discriminators at enhancing fine details and the semantic alignment between the generated images and the input semantic layouts. We achieve state-of-the-art results on both quantitative metrics and subjective evaluation on various semantic segmentation datasets, demonstrating the effectiveness of our approach.
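
A minimal PyTorch-style sketch of the core idea described in the abstract: a small network predicts spatially varying depthwise convolution kernels from the (resized) semantic label map, and each location of the generator's intermediate feature map is filtered by its own predicted kernel. This is not the authors' released implementation; the module name ConditionalConv2d, the predictor architecture, and the channel sizes below are illustrative assumptions.

    # Illustrative sketch only, not the paper's released code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionalConv2d(nn.Module):
        """Applies per-location depthwise kernels predicted from a semantic label map."""

        def __init__(self, feat_channels: int, label_channels: int, kernel_size: int = 3):
            super().__init__()
            self.k = kernel_size
            self.c = feat_channels
            # Predicts one k*k kernel per feature channel at every spatial location.
            self.kernel_predictor = nn.Sequential(
                nn.Conv2d(label_channels, 128, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(128, feat_channels * kernel_size * kernel_size, 3, padding=1),
            )

        def forward(self, feat: torch.Tensor, label_map: torch.Tensor) -> torch.Tensor:
            b, c, h, w = feat.shape
            # Resize the one-hot semantic layout to the feature-map resolution.
            label_map = F.interpolate(label_map, size=(h, w), mode="nearest")
            kernels = self.kernel_predictor(label_map)              # (B, C*k*k, H, W)
            kernels = kernels.view(b, c, self.k * self.k, h * w)
            # Unfold feature patches so each location is filtered by its own kernel.
            patches = F.unfold(feat, self.k, padding=self.k // 2)   # (B, C*k*k, H*W)
            patches = patches.view(b, c, self.k * self.k, h * w)
            out = (kernels * patches).sum(dim=2)                    # per-location filtering
            return out.view(b, c, h, w)

    # Usage: filter a 64-channel feature map with kernels predicted from a
    # 35-channel one-hot layout (a hypothetical label space for illustration).
    conv = ConditionalConv2d(feat_channels=64, label_channels=35)
    feat = torch.randn(2, 64, 32, 64)
    layout = torch.randn(2, 35, 256, 512)   # stand-in for a one-hot label map
    out = conv(feat, layout)                # -> (2, 64, 32, 64)
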
Authors: Xihui Liu, Guojun Yin, Jing Shao, Xiaogang Wang, Hongsheng Li
Conference name: 33rd Conference on Neural Information Processing Systems (NeurIPS)
Conference start date: 04.12.2019
Conference end date: 08.12.2019
Conference location: Vancouver
Conference country/region: Canada
Proceedings title: ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
Year of publication: 2019
Volume: 32
ISSN: 1049-5258
Language: American English

Last updated on 2021-10-26 at 00:03