ON THE BENEFIT OF CONCURRENT ADJUSTMENT OF ACTIVE AND PASSIVE OPTICAL SENSORS WITH GNSS & RAW INERTIAL DATA
Keywords: Lidar, Geo-referencing, Photogrammetry, Sensor-fusion, UAVs
Abstract. In airborne laser scanning, a high-frequency trajectory solution is typically determined from inertial sensors and employed to directly geo-reference the acquired laser points. When low-cost MEMS inertial sensors are used, as on lightweight unmanned aerial vehicles, non-negligible errors in the estimated trajectory propagate to the final point-cloud, resulting in unsatisfactory accuracy on the ground. Several multi-sensor fusion approaches exist to correct the point-cloud errors caused by imperfect trajectory determination: mismatches between different optical observations and/or within overlapping regions of the point-cloud can be exploited to correct the final point-cloud, either directly, by means of rigid transformations, or indirectly, by improving the estimate of the scanner trajectory. In this work we propose to fuse lidar and cameras in a single adjustment based on dynamic networks, considering 2D tie-points from the imagery and 3D tie-points from overlapping point-cloud sections. In a challenging corridor-mapping scenario, we show that considering either 2D or 3D tie-points, along with inertial and GNSS observations, results in a remarkably accurate point-cloud, even when low-cost inertial sensors are employed and in the presence of challenging surface textures, such as high vegetation. Furthermore, since the spatial distributions of the 2D and 3D tie-points are complementary, considering them together further increases the robustness of the adjustment thanks to the higher redundancy. By employing the proposed approach within this controlled example, we were able to improve the final point-cloud accuracy more than threefold with respect to the conventional geo-referencing methodology and to reduce the magnitude of the errors.
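The abstract only names the ingredients of the adjustment; the formulation itself is developed in the paper. As a rough, self-contained illustration of how 2D tie-points (reprojection constraints), 3D tie-points (coincidence of geo-referenced lidar returns from overlapping sections), and GNSS/inertial-like observations can coexist in a single least-squares problem, consider the minimal Python sketch below. Everything in it is an assumption made for illustration (the two-epoch geometry, the first-order rotation model, the noise levels, and the absolute-attitude prior standing in for accelerometer levelling); it is not the paper's dynamic-network formulation, which operates on raw inertial data at full rate.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    def skew(w):
        # Matrix form of the cross product: skew(w) @ v == np.cross(w, v).
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def rot(w):
        # First-order rotation matrix for a small attitude vector w (toy scale only).
        return np.eye(3) + skew(w)

    def georef(p, w, r):
        # Direct geo-referencing of a lidar return r expressed in the body frame:
        # world point = platform position + R(body -> world) @ r
        # (lever arm and boresight are folded into r for brevity).
        return p + rot(w) @ r

    def project(p, w, X):
        # Unit-focal pinhole projection of world point X seen from pose (p, w).
        Xc = rot(w).T @ (X - p)  # world -> camera frame
        return Xc[:2] / Xc[2]

    # Ground truth: two platform epochs, two image features, two lidar-visible points.
    p_true = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
    w_true = [np.zeros(3), np.zeros(3)]
    feats = [np.array([0.5, 0.3, 10.0]), np.array([-0.4, 0.6, 12.0])]
    ground = [np.array([1.0, -0.5, 9.0]), np.array([0.2, 0.8, 11.0])]

    # Simulated observations with small Gaussian noise.
    z_gnss = [p + 0.05 * rng.standard_normal(3) for p in p_true]       # GNSS positions
    z_att = w_true[0] + 0.002 * rng.standard_normal(3)                 # levelling-like prior
    z_gyro = (w_true[1] - w_true[0]) + 0.002 * rng.standard_normal(3)  # relative attitude
    z_2d = [[project(p_true[k], w_true[k], F) + 0.001 * rng.standard_normal(2)
             for k in range(2)] for F in feats]                        # 2D tie-points
    r_body = [[rot(w_true[k]).T @ (G - p_true[k]) for k in range(2)]
              for G in ground]                                         # raw lidar returns

    def residuals(x):
        p1, w1, p2, w2 = x[0:3], x[3:6], x[6:9], x[9:12]
        X = [x[12:15], x[15:18]]                  # tie-point coordinates (estimated)
        res = [p1 - z_gnss[0], p2 - z_gnss[1],    # absolute positions (GNSS)
               w1 - z_att,                        # absolute attitude of epoch 1
               (w2 - w1) - z_gyro]                # relative attitude (inertial-like)
        for F, z in zip(X, z_2d):                 # 2D tie-points: reprojection
            res += [project(p1, w1, F) - z[0], project(p2, w2, F) - z[1]]
        for r1, r2 in r_body:                     # 3D tie-points: the same surface point,
            res.append(georef(p1, w1, r1) - georef(p2, w2, r2))  # once geo-referenced,
        return np.concatenate(res)                               # must coincide

    x_true = np.concatenate(p_true[:1] + w_true[:1] + p_true[1:] + w_true[1:] + feats)
    x0 = x_true + 0.1 * rng.standard_normal(x_true.size)  # perturbed initial guess
    sol = least_squares(residuals, x0)
    print("epoch-1 position:", np.round(sol.x[0:3], 3))
    print("epoch-2 position:", np.round(sol.x[6:9], 3))

Incidentally, in this toy the 3D tie-point residual alone leaves the attitude component about the flight line weakly determined, which is why the attitude prior and the image tie-points are included; this loosely mirrors the complementarity between the two observation types argued in the abstract.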