ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Copernicus Publications
Articles | Volume I-3
23 Jul 2012


L. Lelégard, E. Delaygue, M. Brédif, and B. Vallet

Keywords: Motion blur, airborne imagery, multi-channel imagery, Fourier transform, image restoration.

Abstract. This article describes a pipeline developed to automatically detect and correct the motion blur caused by airplane motion in aerial images acquired by a digital camera system with channel-dependent exposure times. Blurred images show an anisotropy in their Fourier transform coefficients that can be detected and estimated to recover the characteristics of the motion blur. To disambiguate the anisotropy produced by motion blur from the spectral anisotropy that periodic patterns may produce in a sharp image, we consider the phase difference of the Fourier transforms of two channels shot with different exposure times (i.e. with different blur extents). This is possible because the deep correlation between the three visible channels ensures phase coherence of the Fourier transform coefficients in sharp images. In this context, the phase difference constitutes both a good detector and a good estimator of the motion blur parameters. To improve this estimation, the phase difference is computed on local windows of the image where the channels are more strongly correlated. The main lobe of the phase difference, where the phase difference between the two channels is close to zero, approximates an ellipse whose axis ratio discriminates blur, and whose orientation and minor axis give respectively the orientation and the kernel extent of the blur in the long-exposure-time channels. However, this approach is not robust to the presence, in the phase difference, of minor lobes due to phase sign inversions in the Fourier transform of the motion blur; these are removed by considering the polar representation of the phase difference. Based on this detection step, blur correction is finally performed using one of two approaches depending on the blur extent: a simple frequency-based fusion for small blurs, or a semi-blind iterative method for larger blurs. The higher computing cost of the latter makes it suitable only for large motion blurs, where the former method is not applicable.
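The phase-difference idea described in the abstract can be illustrated with a minimal toy sketch (not the paper's implementation): a short-exposure channel is simulated as a random image, a long-exposure channel as the same image blurred by a horizontal motion kernel, and the phase difference of their Fourier transforms then equals the phase of the blur's transfer function. The image size, blur length, and helper names below are illustrative assumptions.

```python
import numpy as np

N = 128   # image side (illustrative)
L = 9     # simulated motion blur length in pixels, assumed horizontal

def motion_blur_kernel(length, size):
    """Centered horizontal box kernel of the given length in a size x size image."""
    k = np.zeros((size, size))
    c = size // 2
    k[c, c - length // 2 : c - length // 2 + length] = 1.0 / length
    return k

rng = np.random.default_rng(0)
sharp = rng.random((N, N))               # stand-in for the short-exposure channel

# Long-exposure channel: same scene circularly convolved with the blur kernel.
otf = np.fft.fft2(np.fft.ifftshift(motion_blur_kernel(L, N)))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * otf))

# Phase difference of the two channels' Fourier transforms. For perfectly
# correlated channels it equals the phase of the blur transfer function:
# ~0 inside the main lobe, +/- pi in the minor lobes (sinc sign flips).
phase_diff = np.angle(np.fft.fft2(blurred) * np.conj(np.fft.fft2(sharp)))

# Along the frequency axis parallel to the motion, the first sign flip of the
# phase marks the edge of the main lobe; its position is roughly N / L, so it
# gives an estimate of the blur extent (the "minor axis" of the abstract).
first_flip = int(np.argmax(np.abs(phase_diff[0, 1 : N // 2]) > np.pi / 2)) + 1
estimated_length = round(N / first_flip)
print(first_flip, estimated_length)  # first flip at 15 -> round(128/15) = 9
```

In this idealized setting the estimate recovers the simulated blur length exactly; on real imagery the channels are only approximately correlated, which is why the paper works on local windows and removes the minor lobes via a polar representation before fitting the ellipse.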