ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume X-5/W2-2025
https://doi.org/10.5194/isprs-annals-X-5-W2-2025-7-2025
19 Dec 2025

Deep Learning Based Land Use Land Cover Classification On Multi-Sensor Remote Sensing Data

Advaith C A, Shefali Agrawal, Vinay Kumar, Shashi Kumar, and Poonam S. Tiwari

Keywords: Deep Learning, Remote Sensing, Land Use Land Cover Classification, Multi-Sensor Remote Sensing Data

Abstract. Accurate Land Use Land Cover (LULC) classification is critical for monitoring urban expansion, resource planning, and environmental management. This research investigates the combination of multi-modal remote sensing data—optical (Sentinel-2, PRISMA), Synthetic Aperture Radar (Sentinel-1), Digital Elevation Model (Cartosat-derived), and Global Human Settlement Layer (GHSL)—for LULC mapping using a deep learning model. A Vision Transformer-based model, SegFormer with Spatial Attention, was used to leverage the capabilities of the sensors optimally. The research was carried out over Dwarka, Delhi, selected for its composite LULC pattern of urban areas, grassland, forest, and water bodies. The data stack was resampled to 10 m resolution, segmented into 256×256 tiles, and augmented to improve model generalizability. Semi-manual annotations were employed for supervised training. The model was trained with a combination of cross-entropy and Dice loss, and evaluated using precision, recall, F1-score, and overall accuracy. The results reveal a significant enhancement of classification performance with multi-sensor integration, where the model achieved an overall accuracy of 86.9%. The inclusion of each data source improved performance, especially when combining optical and SAR data with GHSL. This research illustrates the potential of transformer-based models in remote sensing applications, specifically in exploiting multi-source satellite data for sophisticated LULC mapping. Future research can include large-scale training datasets, region-specific tuning, or architectural variations to enhance model adaptability and robustness.
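The abstract states that the model was trained with a combination of cross-entropy and Dice loss, a common choice for semantic segmentation when class imbalance is present. The sketch below illustrates one plausible form of such a combined loss in NumPy; the equal 0.5/0.5 weighting and the soft-Dice formulation are assumptions for illustration, not details from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def combined_ce_dice_loss(logits, labels, num_classes, ce_weight=0.5, eps=1e-6):
    """Combined cross-entropy + soft Dice loss for semantic segmentation.

    logits: (H, W, C) raw class scores for one tile.
    labels: (H, W) integer class ids.
    ce_weight: assumed mixing weight between the two terms (hypothetical).
    """
    probs = softmax(logits)                      # (H, W, C) class probabilities
    onehot = np.eye(num_classes)[labels]         # (H, W, C) one-hot ground truth
    # Pixel-wise cross-entropy, averaged over the tile
    ce = -np.mean(np.sum(onehot * np.log(probs + eps), axis=-1))
    # Soft Dice coefficient per class, averaged over classes
    inter = np.sum(probs * onehot, axis=(0, 1))
    union = np.sum(probs + onehot, axis=(0, 1))
    dice = np.mean((2.0 * inter + eps) / (union + eps))
    # Dice *loss* is (1 - dice); blend the two terms
    return ce_weight * ce + (1.0 - ce_weight) * (1.0 - dice)
```

A confident, correct prediction drives both terms toward zero, while the Dice term keeps gradients informative for rare classes (e.g. small water bodies) that cross-entropy alone tends to under-weight.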
