Autonomous Semantic Mapping for SLAM Systems
Keywords: Semantic Mapping, Segmentation, SLAM, Multimodal, Real-time
Abstract. Semantic mapping is crucial for intelligent obstacle avoidance and planning in SLAM systems. We propose an autonomous semantic mapping approach that integrates multimodal semantic segmentation with SLAM to construct a dense 3D semantic map in real time. In each frame, multimodal semantic segmentation based on camera images and LiDAR point clouds assigns image segmentation labels to LiDAR points, yielding per-frame 3D semantic information. These segmented frames are then incrementally fused within the SLAM framework to produce a globally consistent semantic map of the environment. The proposed approach is validated through real-world experiments conducted around the Star Lake Building at Wuhan University using the Luo-Jia Explorer system. The experimental results show that our method achieves real-time performance, with an inference speed of up to 14 Hz on an RTX 4070 GPU, comfortably keeping pace with sensor data arriving at 10 Hz while maintaining high segmentation accuracy in both indoor and outdoor scenarios.
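The core label-transfer step described above, projecting each LiDAR point into the camera image and sampling the 2D segmentation label at the resulting pixel, can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the function name, the calibration matrices `K` and `T_cam_lidar`, and the synthetic demo data are all hypothetical.

```python
import numpy as np

def assign_labels(points_lidar, seg_mask, K, T_cam_lidar):
    """Assign per-pixel segmentation labels to LiDAR points by projecting
    each point into the camera image (illustrative sketch only).

    points_lidar: (N, 3) XYZ coordinates in the LiDAR frame
    seg_mask:     (H, W) integer label image from the 2D segmenter
    K:            (3, 3) camera intrinsic matrix
    T_cam_lidar:  (4, 4) LiDAR-to-camera extrinsic transform
    Returns (N,) labels; -1 marks points behind the camera or outside the image.
    """
    N = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((N, 1))])      # (N, 4) homogeneous
    p_cam = (T_cam_lidar @ homog.T).T[:, :3]                # points in camera frame
    labels = np.full(N, -1, dtype=np.int32)
    in_front = p_cam[:, 2] > 0                              # keep points in front of camera
    uv = (K @ p_cam[in_front].T).T
    uv = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)       # perspective divide -> pixels
    H, W = seg_mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_mask[uv[valid, 1], uv[valid, 0]]      # sample label at (row, col)
    return labels

# Tiny demo with an identity extrinsic and a synthetic two-region mask
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
mask = np.zeros((64, 64), dtype=np.int32)
mask[:, 32:] = 5                                            # right half labeled "5"
pts = np.array([[ 0.1, 0.0,  1.0],    # projects right of center -> label 5
                [-0.1, 0.0,  1.0],    # projects left of center  -> label 0
                [ 0.0, 0.0, -1.0]])   # behind the camera        -> -1
print(assign_labels(pts, mask, K, T))  # -> [ 5  0 -1]
```

Accumulating these labeled points across frames, using the SLAM poses to place each frame in the global coordinate system, is what yields the globally consistent semantic map.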
