The public art design of urban space using deep learning under the Internet of Things.
This study investigates novel methods for designing public art in urban environments by combining Internet of Things (IoT) technology with advanced deep learning algorithms. Deep learning-based object detection is used to identify images of artworks and buildings in urban spaces, together with an improved version of the U-shaped Network (U-Net) for image segmentation. The Fast Region-based Convolutional Neural Network (Fast R-CNN) serves as the underlying framework for training the urban-space public art object detection network. The results show that the U-Net model achieves an image segmentation accuracy of 97.3%, compared with 87.5% for the traditional model. Both models improve significantly in segmentation accuracy as the overlap threshold increases: the U-Net model reaches a peak segmentation success rate of 97%, while the traditional method peaks at 92.7%. The study underscores the importance of reducing variation within a defined overlap threshold to improve the accuracy of segmenting public art images in urban settings. Object detection proves effective at identifying images relevant to urban public art design, and IoT technology enables the large-scale collection of diverse public art images, landmarks, and associated data, highlighting the need for continued dialogue on public art.
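The abstract does not include implementation details. Purely as an illustration of the U-Net encoder-decoder pattern it references, the following is a minimal sketch in PyTorch; the depth, channel counts, and the class name `MiniUNet` are assumptions made for the example, not the authors' configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Two-level U-Net sketch: encoder, bottleneck, decoder with skip connections.
    Hypothetical sizes; not the configuration used in the paper."""

    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 upsampled + 64 from skip
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 upsampled + 32 from skip
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel class logits

# Per-pixel logits for a batch of one 256x256 RGB image.
model = MiniUNet()
logits = model(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

The success rates quoted above are reported against an overlap threshold, which in segmentation evaluation is conventionally the intersection-over-union (IoU) between a predicted and a ground-truth mask. A brief sketch of that metric (the function name `mask_iou` and the tie-breaking rule for empty masks are our assumptions):

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, target).sum() / union)

# A segmentation counts as a success at threshold t when mask_iou >= t;
# sweeping t from 0 to 1 yields a success-rate curve of the kind described above.
```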