DENSE POINT CLOUD EXTRACTION FROM UAV IMAGERY USING PARALLAX ATTENTION
Keywords: UAV, point cloud, dense image matching, deep learning, parallax attention
Abstract. Unmanned Aerial Vehicles (UAVs) have proven to be one of the most disruptive technologies of the last decades, impacting many different applications such as environmental monitoring, disaster management, land administration, and water management. The photogrammetric pipeline is the core building block that enables researchers and practitioners to deliver UAV-based solutions for these applications. Advances in deep learning show promising results for improving individual steps of this pipeline. This study specifically investigates the use of a parallax attention mechanism to improve dense point cloud extraction from UAV imagery. We experimented with three different setups for applying a parallax attention stereo matching network and compared them against a semi-global matching based method. The first setup directly applies a pretrained stereo matching network, the second fine-tunes the pretrained network on the UAV dataset, and the third retrains the network using disparity values derived from a reference DSM of lower resolution. Results show that the accuracy of the extracted point cloud can improve notably when a parallax attention stereo matching network is used for the dense image matching step instead of the conventional semi-global matching method, for the easier case of stereo pairs with high overlap and little occlusion. However, improvements are unclear for stereo pairs that differ strongly from those on which the networks were originally trained, e.g. longer baselines resulting in lower overlap and more occlusions. Furthermore, retraining with disparity values derived from a lower-resolution DSM does not improve the resulting point cloud either.
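For the third setup, reference disparities can be obtained from the lower-resolution DSM via the standard depth-to-disparity relation for a rectified stereo pair, d = f·B / Z. The sketch below is a minimal illustration of that conversion only, assuming a near-nadir configuration with a constant flying height; the function and parameter names are hypothetical and not taken from this study.

```python
import numpy as np

def dsm_to_disparity(dsm_heights, focal_px, baseline_m, flying_height_m):
    """Convert reference-DSM heights to disparities for a rectified,
    near-nadir stereo pair using d = f * B / Z, where Z is the
    camera-to-ground distance (flying height minus terrain height).
    Illustrative sketch; parameter names are assumptions."""
    depth = flying_height_m - dsm_heights      # camera-to-ground distance per DSM cell (m)
    depth = np.clip(depth, 1e-3, None)         # guard against non-positive depths
    return focal_px * baseline_m / depth       # disparity in pixels

# Example: a 3x3 patch of DSM heights (metres above datum)
dsm = np.array([[310.0, 311.5, 312.0],
                [310.2, 311.0, 313.4],
                [309.8, 310.5, 312.8]])

disparity = dsm_to_disparity(dsm, focal_px=3800.0,
                             baseline_m=40.0, flying_height_m=420.0)
print(disparity)
```

Such per-pixel disparities, resampled to the image grid, could then serve as supervision when retraining the stereo matching network.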