DEEP RESIDUAL LEARNING FOR SINGLE-IMAGE SUPER-RESOLUTION OF MULTI-SPECTRAL SATELLITE IMAGERY
Keywords: Single-Image Super-Resolution, Convolutional Neural Networks, Deep Learning, Residual Learning, Remote Sensing, Sentinel-2
Abstract. Analyzing optical remote sensing imagery depends heavily on its spatial resolution. At the same time, this data is adversely affected by fixed sensor parameters and environmental influences. Methods for increasing the quality of such data and thereby enhancing its information content are, thus, in high demand. In particular, single-image super-resolution (SISR) approaches aim to achieve this goal using only the individual image itself.
We propose to adapt a generic deep residual neural network architecture for SISR to the particular characteristics of remote sensing satellite imagery, especially taking into account the different spatial resolutions of individual Sentinel-2 bands, i.e., ground sampling distances of 20 m and 10 m. As a result, this method is able to increase the perceived resolution of the 20 m channels and fuse all spectral bands. Experimental evaluation and ablation studies on large datasets show superior performance compared to state-of-the-art methods and indicate that the model is not limited by its capacity.
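For illustration, the following is a minimal PyTorch sketch of how a residual SISR network of this kind might combine Sentinel-2 bands of different ground sampling distances: the 20 m bands are upsampled to the 10 m grid, concatenated with the 10 m bands, and refined by a stack of residual blocks that predicts a residual correction. This is not the authors' implementation; all module names, layer widths, and the choice of bilinear upsampling are assumptions made for demonstration.

```python
# Illustrative sketch (not the paper's implementation) of a residual SISR
# network fusing Sentinel-2 bands with 10 m and 20 m ground sampling distance.
# All layer sizes and names are assumptions for demonstration purposes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """EDSR-style residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))


class BandFusionSISR(nn.Module):
    """Hypothetical network: upsample the 20 m bands to the 10 m grid,
    concatenate with the 10 m bands, and refine with residual blocks."""
    def __init__(self, n_bands_10m: int = 4, n_bands_20m: int = 6,
                 features: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(n_bands_10m + n_bands_20m, features,
                              kernel_size=3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(features)
                                    for _ in range(n_blocks)])
        # Predict a residual correction for the upsampled 20 m bands only.
        self.tail = nn.Conv2d(features, n_bands_20m, kernel_size=3, padding=1)

    def forward(self, bands_10m, bands_20m):
        # Naive bilinear upsampling of the 20 m bands to the 10 m grid (x2).
        up_20m = F.interpolate(bands_20m, scale_factor=2, mode="bilinear",
                               align_corners=False)
        x = torch.cat([bands_10m, up_20m], dim=1)
        residual = self.tail(self.body(self.head(x)))
        # Global residual learning: only the high-frequency correction on top
        # of the naively upsampled 20 m bands is learned.
        return up_20m + residual


if __name__ == "__main__":
    model = BandFusionSISR()
    b10 = torch.randn(1, 4, 128, 128)   # four 10 m bands
    b20 = torch.randn(1, 6, 64, 64)     # six 20 m bands
    print(model(b10, b20).shape)        # torch.Size([1, 6, 128, 128])
```

In this sketch, predicting a residual on top of a simple upsampling, rather than the full-resolution bands directly, reflects the residual-learning idea named in the title; the actual network design and training details are described in the body of the paper.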