LEVERAGING DYNAMIC OBJECTS FOR RELATIVE LOCALIZATION CORRECTION IN A CONNECTED AUTONOMOUS VEHICLE NETWORK
Keywords: Localization, Sensor Network, Sensor Fusion, Collective Perception, Point Cloud, Registration
Abstract. Highly accurate localization is crucial for the safety and reliability of autonomous driving, especially for the information fusion of collective perception, which aims to further improve road safety by sharing information in a communication network of Connected Autonomous Vehicles (CAVs). In this scenario, even small localization errors can complicate the fusion of information from different CAVs. In this paper, we propose a RANSAC-based (RANdom SAmple Consensus) method to correct the relative localization errors between two CAVs in order to ease the information fusion among them. Unlike previous LiDAR-based localization algorithms that consider only static environmental information, this method also leverages dynamic objects for localization, thanks to real-time data sharing between CAVs. Specifically, in addition to static objects such as poles, fences, and facades, the object centers of detected dynamic vehicles are used as keypoints for matching the two point sets. Experiments on the synthetic dataset COMAP show that the proposed method reduces the relative localization error between two CAVs to less than 20 cm, provided that enough vehicles and poles are correctly detected by both CAVs. Moreover, the proposed method has a low runtime and can be used in real-time autonomous driving scenarios.
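To make the core idea of the abstract concrete, the following is a minimal sketch of RANSAC-based rigid alignment over shared keypoints (e.g., pole positions and detected vehicle centers reported by two CAVs). It is not the paper's exact implementation; the function names, the minimal sample size of three, and the 0.2 m inlier threshold are illustrative assumptions.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via the Kabsch
    algorithm; src and dst are (N, 3) arrays of matched keypoints."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def ransac_align(src_kp, dst_kp, iters=200, inlier_thresh=0.2):
    """RANSAC over putative keypoint correspondences (static poles/fences/facades
    plus dynamic vehicle centers shared between two CAVs) to estimate the
    relative pose correction. Assumes src_kp[i] and dst_kp[i] are matched."""
    n = len(src_kp)
    rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample for a rigid transform
        R, t = rigid_transform(src_kp[idx], dst_kp[idx])
        residuals = np.linalg.norm((src_kp @ R.T + t) - dst_kp, axis=1)
        inliers = residuals < inlier_thresh          # e.g. 0.2 m inlier threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the transform on all inliers of the best model
    return rigid_transform(src_kp[best_inliers], dst_kp[best_inliers])
```

Because only a handful of object-level keypoints are matched (rather than full point clouds), each RANSAC iteration is cheap, which is consistent with the abstract's claim of real-time suitability.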