Discovering place-informative scenes and objects using social media photos
Publication in refereed journal

Researchers at The Chinese University of Hong Kong

Other information
Abstract: Understanding the visual discrepancy and heterogeneity of different places is of great interest to architectural design, urban design and tourism planning. However, previous studies have been limited by the lack of adequate data and efficient methods to quantify the visual aspects of a place. This work proposes a data-driven framework to explore place-informative scenes and objects by employing a deep convolutional neural network to learn and measure the visual knowledge of place appearance automatically from a massive dataset of photos and imagery. Based on the proposed framework, we compare the visual similarity and visual distinctiveness of 18 cities worldwide using millions of geo-tagged photos obtained from social media. As a result, we identify the visual cues that distinguish each city from the others: in addition to landmarks, a large number of historical buildings, religious sites, unique urban scenes and some unusual natural landscapes are identified as the most place-informative elements. In terms of city-informative objects, taking vehicles as an example, we find that taxis, police cars and ambulances are the most place-informative ones. The results of this work are inspiring for various fields, providing insights on what large-scale geo-tagged data can achieve in understanding place formalization and urban design.
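The framework summarized in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' released code): fine-tune an off-the-shelf CNN to predict which of the 18 cities a geo-tagged photo was taken in, and treat photos that the trained model assigns to their true city with high confidence as place-informative. The directory layout, model choice (ResNet-50), and hyperparameters below are illustrative assumptions, not details taken from the paper.

# Minimal sketch, assuming a photo collection organised as photos/<city_name>/<image>.jpg
# (hypothetical layout). A pretrained ResNet-50 is fine-tuned as an 18-way city classifier;
# at inference time, the softmax confidence on a photo's true city can serve as a simple
# place-informativeness score.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CITIES = 18  # the study compares 18 cities worldwide

# Standard ImageNet-style preprocessing for the photos.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("photos", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

# Replace the final layer of a pretrained ResNet-50 with an 18-way city classifier.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CITIES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, city_labels in loader:
        images, city_labels = images.to(device), city_labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), city_labels)
        loss.backward()
        optimizer.step()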
Date accepted by publisher: 07.02.2019
Authors: Fan Zhang, Bolei Zhou, Carlo Ratti, Yu Liu
Journal title: Royal Society Open Science
Year of publication: 2019
Month: 3
Volume: 6
Issue: 3
Publisher: Royal Society, The: Open Access / Royal Society
Article number: 181375
ISSN: 2054-5703
Language: American English
Keywords: city similarity, city streetscape, deep learning, street-level imagery

Last updated on 2021-19-09 at 00:03