Temporal ViT-U-Net Tandem Model: Enhancing Multi-Sensor Land Cover Classification Through Transformer-Based Utilization of Satellite Image Time Series
Keywords: Semantic segmentation, Land cover classification, Multi-sensor remote sensing, Vision transformer, Satellite image time series
Abstract. Semantic segmentation is essential in remote sensing, underpinning applications such as environmental monitoring and land cover classification. Recent advancements aim to classify data from diverse sensors and epochs jointly to improve predictive accuracy. With the availability of vast Satellite Image Time Series (SITS) data, supervised deep learning methods, such as Transformer models, become viable options. This paper introduces the Temporal Vision Transformer (ViT), designed to extract features from SITS. These features, which capture the temporal patterns of land cover classes, are integrated with features derived from aerial imagery to improve land cover classification. Drawing inspiration from the success of transformers in natural language processing (NLP), the Temporal ViT concurrently extracts spatial and temporal information from SITS data using tailored positional encoding strategies. The proposed approach fosters comprehensive feature learning across both domains, facilitating the seamless integration of encoded SITS features into the aerial imagery branch. Furthermore, a training strategy is proposed that encourages the Temporal ViT to focus on classes whose appearance changes over the course of the year. Extensive experiments carried out in this work demonstrate the enhanced classification performance of the Temporal ViT compared to existing state-of-the-art techniques for multi-modal land cover classification. Our model achieves a 3.8% increase in mean IoU compared to a network relying solely on aerial images.