Real-Time Image Matching for Vision-Based Car Navigation with Built-in Sensory Data
Keywords: Position, Attitude, Navigation, Image, Velocity, Angular Rate, Front Camera, Built-in Sensors
Abstract. Cars are now equipped with various built-in sensors, such as a speedometer, odometer, accelerometer, and angular rate sensor, for safety and maintenance purposes. These sensory data can be provided in real time through a CAN (Controller Area Network) bus. In addition, image sequences can be provided by various cameras mounted on the car, such as built-in front and around-view monitoring cameras. We therefore propose an image-based car navigation framework that determines car position and attitude using built-in sensory data, such as speed and angular rate, together with images from a front-view camera. First, we determine the two-dimensional position and attitude of the car using the velocity and angular rate provided in real time through the CAN bus. We then estimate the three-dimensional position and attitude by conducting sequential bundle block adjustment using the two-dimensional position and attitude and tie points between successive images. The sequential bundle adjustment can produce, in real time, accurate results comparable to those from conventional simultaneous bundle adjustment. As input, this process requires reliable tie points between adjacent images, which must be obtained from a real-time image matching process. Hence, we develop an image matching process based on an enhanced KLT (Kanade-Lucas-Tomasi) algorithm that exploits preliminary exterior orientation parameters. We also construct a test system that can acquire and store built-in sensory data and front camera images simultaneously, and conduct experiments with real data acquired by the system. The experimental results show that the proposed image matching process can generate accurate tie points in about 0.2 seconds on average at each epoch. It thus meets the requirements of real-time bundle adjustment for image-based car navigation.
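To make the tie-point generation step concrete, the following sketch tracks features between consecutive front-camera frames with a plain pyramidal KLT tracker in OpenCV. It is only a baseline illustration under assumed parameter values and an assumed input file name; the paper's enhancement, which predicts search positions from the preliminary exterior orientation parameters, is not implemented here.

```python
import cv2
import numpy as np

def match_tie_points(prev_gray, curr_gray, max_corners=500):
    """Track tie points between two consecutive grayscale frames with
    pyramidal KLT.  Plain-KLT baseline only; the exterior-orientation
    guidance described in the paper is not included in this sketch."""
    # Detect corners to track in the previous frame.
    prev_pts = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=max_corners, qualityLevel=0.01, minDistance=10)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # Track the corners into the current frame with pyramidal Lucas-Kanade.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)

    # Keep only points that were tracked successfully.
    good = status.ravel() == 1
    return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)

if __name__ == "__main__":
    cap = cv2.VideoCapture("front_camera.mp4")   # hypothetical input sequence
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0, p1 = match_tie_points(prev_gray, gray)
        print(f"{len(p0)} tie points tracked")
        prev_gray = gray
```

The tracked point pairs (p0, p1) would then serve as the tie-point observations fed to the sequential bundle block adjustment at each epoch.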