EXPLAINING AI: UNDERSTANDING DEEP LEARNING MODELS FOR HERITAGE POINT CLOUDS
Keywords: explainability, semantic segmentation, cultural heritage, point clouds
Abstract. Deep Learning has been pivotal in many real-world applications (e.g., autonomous driving, medicine, and retail). With the wide availability of consumer-grade depth sensors, acquiring 3D data has become more affordable and effective, and many 3D datasets are now publicly available. 3D data offers machines a great opportunity for a better comprehension of the surrounding environment, and there is a growing need for innovative methods for the processing, analysis, and classification of point clouds. The complex hidden layers at the basis of deep neural networks (DNNs) make these models difficult to interpret, so much so that until a few years ago DNNs were considered and treated as black-box operators. With their increasing popularity, however, making them explainable and interpretable has become mandatory. Considerable effort has been devoted to developing Explainable Artificial Intelligence (XAI) frameworks for explaining DNN decisions on 2D data, while only a few studies have investigated the explainability of 3D DNNs, and even fewer have addressed heritage scenarios. To overcome these limitations, the BubblEX framework was proposed: a novel multimodal fusion framework to learn 3D point features. In this work, BubblEX is exploited to understand the decisions taken by DNNs on heritage point clouds. The approach is applied to a publicly available Digital Cultural Heritage dataset: the ArCH (Architectural Cultural Heritage) Dataset.