PLL-VO: An Efficient and Robust Visual Odometry Integrating Point-Line Features and Neural Networks
Keywords: Visual Odometry, Line Detection, Neural Networks, Self-Supervised Learning, Lighting Conditions
Abstract. Visual odometry is crucial for the navigation and planning of autonomous robots, but low-light conditions, dramatic lighting changes, and low-texture scenes pose significant challenges to odometry estimation. This paper proposes PLL-VO, a visual odometry system that integrates point-line features with deep learning. To mitigate the impact of complex lighting conditions, we present a self-supervised learning method for interest point detection and a line detection algorithm that combines line optical flow tracking with cross-constraints. After selecting keyframes based on point feature counts and line feature overlap angles, we integrate convolutional neural networks (CNNs) and graph neural networks (GNNs) to enhance sparse matching, improving both accuracy and computational efficiency. PLL-VO is evaluated on multiple datasets under various lighting conditions: compared with state-of-the-art (SOTA) algorithms, it reduces the absolute trajectory error of pose estimation by 6.3%, and its average visual odometry computation time of 43 ms is 29.74% lower.
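The abstract's keyframe-selection criterion (point feature counts and line feature overlap angles) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the thresholds, the `Frame` fields, and the exact definition of the overlap angle are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    n_tracked_points: int          # point features tracked from the last keyframe
    line_overlap_angle_deg: float  # assumed: mean angle between matched line pairs

def is_keyframe(frame: Frame,
                min_points: int = 80,
                max_overlap_angle_deg: float = 5.0) -> bool:
    """Insert a keyframe when tracking quality degrades (thresholds are illustrative)."""
    if frame.n_tracked_points < min_points:
        return True   # too few tracked points: tracking is weakening
    if frame.line_overlap_angle_deg > max_overlap_angle_deg:
        return True   # lines have rotated too far relative to the last keyframe
    return False

print(is_keyframe(Frame(50, 2.0)))   # few points -> True
print(is_keyframe(Frame(120, 8.0)))  # large line rotation -> True
print(is_keyframe(Frame(120, 2.0)))  # healthy tracking -> False
```

In this sketch either condition alone triggers a keyframe; the paper's actual combination of the two criteria may differ.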