PIXEL-RESOLUTION DTM GENERATION FOR THE LUNAR SURFACE BASED ON A COMBINED DEEP LEARNING AND SHAPE-FROM-SHADING (SFS) APPROACH
Keywords: Pixel-resolution DTM, Lunar Surface, Deep Learning, Shape from Shading, Convolutional Neural Network
Abstract. High-resolution Digital Terrain Models (DTMs) of the lunar surface can provide crucial spatial information for lunar exploration missions. In this paper, we propose a method to generate high-quality DTMs based on a synthesis of deep learning and Shape from Shading (SFS), taking a Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) image and a coarse-resolution DTM as input. Specifically, we use a Convolutional Neural Network (CNN)-based deep learning architecture to predict an initial pixel-resolution DTM. Then, we use SFS to refine the details of the DTM. The CNN model is trained on a dataset of 30,000 samples, formed from stereo-photogrammetry-derived DTMs and orthoimages generated from LROC NAC images, as well as the Selenological and Engineering Explorer (SELENE) and LRO Elevation Model (SLDEM). Taking the Chang’E-3 landing site as an example, we test the proposed method using a 1.6 m resolution LROC NAC image and a 5 m resolution stereo-photogrammetry-derived DTM as input. We compare the resulting DTMs with those from stereo-photogrammetry and from deep learning alone. The results show that the proposed method can generate high-quality 1.6 m resolution DTMs and clearly improves the visibility of terrain details relative to the initial DTM generated by the deep learning method.
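To make the two-stage pipeline summarized above concrete, the following is a minimal PyTorch sketch: a CNN predicts an initial pixel-resolution DTM from the NAC image plus an upsampled coarse DTM, and a gradient-based SFS step then adjusts the heights so the rendered shading better matches the image. The network layout, the Lambertian reflectance model, the sun direction, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the CNN + SFS pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DTMPredictCNN(nn.Module):
    """Hypothetical CNN: NAC image + upsampled coarse DTM -> pixel-resolution DTM."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, image, coarse_dtm_up):
        x = torch.cat([image, coarse_dtm_up], dim=1)
        # predict a residual on top of the coarse DTM
        return coarse_dtm_up + self.net(x)

def lambertian_shading(dtm, sun_dir, pixel_size=1.6):
    """Render shading from heights with a Lambertian model (assumed reflectance)."""
    dz_dx = (dtm[..., :, 2:] - dtm[..., :, :-2]) / (2 * pixel_size)
    dz_dy = (dtm[..., 2:, :] - dtm[..., :-2, :]) / (2 * pixel_size)
    dz_dx = F.pad(dz_dx, (1, 1, 0, 0), mode="replicate")
    dz_dy = F.pad(dz_dy, (0, 0, 1, 1), mode="replicate")
    normal = torch.stack([-dz_dx, -dz_dy, torch.ones_like(dz_dx)], dim=-1)
    normal = normal / normal.norm(dim=-1, keepdim=True)
    return (normal @ sun_dir).clamp(min=0.0)

def sfs_refine(image, dtm_init, sun_dir, steps=200, lr=0.05, prior_w=0.1):
    """Refine the CNN DTM so its rendered shading matches the observed image."""
    dtm_init = dtm_init.detach()
    dtm = dtm_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([dtm], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        shade = lambertian_shading(dtm, sun_dir)
        photo_loss = F.mse_loss(shade, image.squeeze(1))
        # keep the refined DTM close to the CNN prediction (regularization)
        prior_loss = F.mse_loss(dtm, dtm_init)
        (photo_loss + prior_w * prior_loss).backward()
        opt.step()
    return dtm.detach()

if __name__ == "__main__":
    image = torch.rand(1, 1, 128, 128)          # normalized NAC image patch
    coarse = torch.rand(1, 1, 16, 16) * 10.0    # coarse DTM patch (SLDEM-like)
    coarse_up = F.interpolate(coarse, size=(128, 128), mode="bilinear",
                              align_corners=False)
    sun_dir = torch.tensor([0.5, 0.0, 0.866])   # assumed unit sun-direction vector

    cnn = DTMPredictCNN()
    dtm_init = cnn(image, coarse_up)                             # stage 1: CNN DTM
    dtm_final = sfs_refine(image, dtm_init.squeeze(1), sun_dir)  # stage 2: SFS
    print(dtm_final.shape)
```

In this sketch the SFS stage is posed as a photometric optimization regularized toward the CNN output; the actual reflectance model, regularization terms, and solver used in the paper may differ.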