ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-3-2020
https://doi.org/10.5194/isprs-annals-V-3-2020-227-2020
03 Aug 2020

ENHANCEMENT OF DEPTH MAP BY FUSION USING ADAPTIVE AND SEMANTIC-GUIDED SPATIOTEMPORAL FILTERING

H. Albanwan and R. Qin

Keywords: Multi-depth Fusion, Digital Surface Model (DSM), Adaptive Spatiotemporal Fusion, Multi-view Stereo (MVS)

Abstract. Extracting detailed geometric information about a scene relies on the quality of depth maps (e.g. Digital Surface Models, DSMs) to enhance the performance of 3D model reconstruction. Elevation information from LiDAR is often expensive and hard to obtain. The most common approach to generating depth maps is through multi-view stereo (MVS) methods (e.g. dense stereo image matching). The quality of a single depth map, however, is prone to noise, outliers, and missing data points caused by the quality of the acquired image pairs; a reference multi-view image pair must be noise-free and clear to ensure a high-quality depth map. To avoid this problem, current research is moving toward fusing multiple depth maps to recover the shortcomings of a single depth map derived from a single pair of multi-view images. Several approaches tackle this problem by merging and fusing depth maps using probabilistic and deterministic methods, but few discuss how these fused depth maps can be refined through adaptive spatiotemporal analysis algorithms (e.g. spatiotemporal filters). The motivation is to preserve the high precision and level of detail of depth maps while optimizing the performance, robustness, and efficiency of the algorithm.
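To make the fusion idea concrete, the sketch below shows one simple, generic way to combine a stack of depth maps: a per-pixel median that is robust to outliers and tolerant of missing values. This is an illustrative baseline only, not the adaptive semantic-guided method proposed in the paper; the function name and the toy 2x2 maps are hypothetical.

```python
import numpy as np

def fuse_depth_maps(depth_maps):
    """Fuse a stack of co-registered depth maps by per-pixel median.

    Missing points are encoded as NaN and ignored; the median suppresses
    outliers that a single stereo pair may introduce. (Illustrative
    baseline, not the paper's adaptive spatiotemporal filter.)
    """
    stack = np.stack(depth_maps, axis=0).astype(float)
    return np.nanmedian(stack, axis=0)

# Three noisy 2x2 depth maps over the same scene:
# d2 has a missing pixel, d3 has a gross outlier at (1, 0).
d1 = np.array([[10.0, 12.0], [11.0, 13.0]])
d2 = np.array([[10.2, np.nan], [11.1, 13.2]])
d3 = np.array([[10.1, 12.1], [50.0, 13.1]])

fused = fuse_depth_maps([d1, d2, d3])
```

The fused map recovers a plausible value at the missing pixel (from the two valid observations) and discards the 50.0 outlier, which is the basic benefit that more sophisticated adaptive and semantic-guided filters then refine.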