A NEW FEATURE DESCRIPTOR FOR LIDAR IMAGE MATCHING
Keywords: LiDAR, Computer Vision, Point Matching, Feature Descriptor
Abstract. LIght Detection And Ranging (LiDAR) data is delivered as a point cloud covering long overlapping strips on the ground. The overlap between these strips typically varies between 10% and 30%, and the strips are unregistered with respect to each other. Any further interpretation or study of the whole area requires registering the strips to one another, a process called strip adjustment. Traditionally, LiDAR point clouds are matched and strip-adjusted using the iterative closest point (ICP) algorithm or variants of it, which, as the name suggests, run over multiple iterations. Iterative algorithms, however, are time-consuming, and this paper offers an alternative. We find point correspondences on overlapping strips so that the strips can be registered with each other, and introduce a new method for point matching on LiDAR data. Our algorithm combines the power of LiDAR elevation data with a keypoint detector and descriptor based on the Scale Invariant Feature Transform (SIFT). The detector finds keypoints in the LiDAR intensity image using SIFT; a unique signature for each keypoint is then obtained by examining a patch surrounding that point in the elevation image. Histograms of subdivisions of the patch form the keypoint descriptor. Once all keypoints and descriptors are obtained for two overlapping strips, correspondences are found using the nearest neighbor distance ratio (NNDR) method.
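The descriptor and matching steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the elevation image is a NumPy array, that keypoint locations have already been obtained (e.g. from a SIFT detector run on the intensity image), and the patch size, grid, and bin counts are illustrative assumptions.

```python
import numpy as np

def elevation_descriptor(elev, pt, patch=32, grid=4, bins=8):
    """Sketch of the elevation-patch descriptor: the patch around a
    keypoint is subdivided into a grid of cells, a histogram of
    elevation values is computed per cell, and the concatenated,
    normalized histograms form the signature.
    (patch/grid/bins values are illustrative assumptions.)"""
    r, c = int(pt[0]), int(pt[1])
    h = patch // 2
    window = elev[r - h:r + h, c - h:c + h]
    lo, hi = window.min(), window.max() + 1e-9  # shared histogram range
    cell = patch // grid
    desc = []
    for i in range(grid):
        for j in range(grid):
            sub = window[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hist, _ = np.histogram(sub, bins=bins, range=(lo, hi))
            desc.append(hist)
    desc = np.concatenate(desc).astype(float)
    return desc / (np.linalg.norm(desc) + 1e-12)

def nndr_match(desc1, desc2, ratio=0.8):
    """Nearest-neighbor distance-ratio matching: accept a match only if
    the closest descriptor is sufficiently closer than the second-closest."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn = np.argsort(dists)[:2]
        if dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((i, int(nn[0])))
    return matches
```

With a 32-pixel patch, a 4x4 grid, and 8 bins per cell, each keypoint yields a 128-element descriptor; matching identical strips returns each keypoint paired with itself, since the nearest-neighbor distance is then zero.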