ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume X-2/W1-2024
https://doi.org/10.5194/isprs-annals-X-2-W1-2024-31-2024
16 Dec 2024

Fusion of Deep Learning-based and Spectral Features for Hyperspectral Image Analysis

Evgeny Myasnikov

Keywords: Hyperspectral Images, Feature Fusion, Classification, Dimensionality Reduction, ResNet18

Abstract. Deep neural networks are currently among the most effective tools in computer vision. In hyperspectral remote sensing image analysis, however, their practical application is limited, because training them requires the manual labeling of large amounts of data. Since this process is time-consuming and expensive, an attractive alternative is to use neural networks pre-trained on color images. To benefit from hyperspectral data, however, such networks must be complemented by a mechanism that accounts for the detailed spectral information these images contain.
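A minimal sketch, for illustration only, of how a color-image network can be applied to hyperspectral data: a spatial window of the hyperspectral cube is rendered as a three-band image and passed through an ImageNet-pretrained ResNet18 to obtain a deep feature vector. The band selection, preprocessing, and window handling below are assumptions, not necessarily the pipeline used in the paper.

```python
# Illustrative sketch: deep features from a hyperspectral window via a pretrained ResNet18.
# The choice of three bands to form a pseudo-RGB image is an assumption.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

def resnet18_embedding(window: np.ndarray, rgb_bands=(29, 19, 9)) -> np.ndarray:
    """window: (H, W, B) hyperspectral patch; returns a 512-d deep feature vector."""
    rgb = window[:, :, list(rgb_bands)].astype(np.float32)
    rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-8)   # scale to [0, 1]

    preprocess = T.Compose([
        T.ToTensor(),                                          # HWC [0,1] -> CHW tensor
        T.Resize((224, 224), antialias=True),                  # match ImageNet input size
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(rgb).unsqueeze(0)                           # (1, 3, 224, 224)

    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()                          # keep the 512-d pooled features
    backbone.eval()
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()
```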
In this work, we propose to combine deep features computed with a pre-trained convolutional neural network (specifically ResNet18) with the spectral features of hyperspectral images. The proposed scheme combines a selected type of distance (Euclidean distance, spectral angle, or Hellinger divergence) in the spectral and embedding spaces, followed by the synthesis of features in a space of a given dimensionality. The scheme requires no training other than the selection of a few parameters (spatial window size, dimensionality of the synthesized space, and fusion coefficient). Experiments on well-known hyperspectral scenes (Indian Pines, Salinas, Pavia University, Kennedy Space Center) demonstrate the advantages of the proposed approach. The issue of train-test sample splitting is also considered.
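A hedged sketch of the fusion idea described above: pairwise distances are computed separately in the spectral space (here the spectral angle, one of the listed options) and in the deep-embedding space (Euclidean), blended with a fusion coefficient, and the blended distance matrix is converted into features of a chosen dimensionality. Classical MDS is used here as one plausible synthesis step; the function names, normalization, and MDS choice are illustrative assumptions rather than the paper's exact procedure.

```python
# Illustrative sketch of distance fusion and feature synthesis; not the authors' exact method.
import numpy as np
from sklearn.manifold import MDS

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Spectral angle between two spectra (one of the distance options in the abstract)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def fused_features(spectra: np.ndarray, embeddings: np.ndarray,
                   alpha: float = 0.5, out_dim: int = 30) -> np.ndarray:
    """spectra: (N, B) raw spectra, embeddings: (N, D) deep features, alpha: fusion coefficient."""
    n = len(spectra)
    d_spec = np.zeros((n, n))
    d_deep = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d_spec[i, j] = d_spec[j, i] = spectral_angle(spectra[i], spectra[j])
            d_deep[i, j] = d_deep[j, i] = np.linalg.norm(embeddings[i] - embeddings[j])

    # Normalize each distance matrix so the fusion coefficient is scale-independent (assumption).
    d_spec /= d_spec.max() + 1e-12
    d_deep /= d_deep.max() + 1e-12
    d_fused = alpha * d_spec + (1.0 - alpha) * d_deep

    # Synthesize coordinates of the requested dimensionality from the fused distances.
    mds = MDS(n_components=out_dim, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(d_fused)
```

In this sketch, setting alpha to 1 uses only spectral distances, alpha equal to 0 relies entirely on the deep embeddings, and intermediate values blend the two sources.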