ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume X-1/W1-2023
https://doi.org/10.5194/isprs-annals-X-1-W1-2023-175-2023
05 Dec 2023

ADDRESSING CLASS IMBALANCE FOR TRAINING A MULTI-TASK CLASSIFIER IN THE CONTEXT OF SILK HERITAGE

M. Dorozynski

Keywords: deep learning, image classification, multi-task learning, class imbalance, incomplete labelling, silk heritage

Abstract. Collecting knowledge in the form of databases of images and descriptive texts representing objects from past centuries is a fundamental part of preserving cultural heritage. In this context, images with known information about the depicted artifacts can serve as a source of information for automated methods that complete existing collections. For instance, image classifiers can provide predictions for different object properties (tasks) to semantically enrich collections. A challenge in this context is to train such classifiers given the nature of existing data: many images are not accompanied by a class label for every task (incomplete samples), and class distributions are commonly imbalanced. In this paper, these challenges are addressed by a multi-task training strategy for a classifier based on a convolutional neural network (SilkNet) that requires images with class labels for the tasks to be learned. The proposed approach can deal with incomplete training examples while implicitly taking interdependencies between tasks into account. Extensions of the training approach, which focus on hard examples during training and use an auxiliary feature clustering, are developed to counteract problems caused by class imbalance. Evaluation is conducted on a dataset of images of historical silk fabrics with labels for five tasks, i.e. five silk properties. A comparison of different variants of the classifier shows that the extensions of the training approach significantly improve the classifier's performance: the average F1-score is up to 5.0% higher, with the largest improvements occurring for underrepresented classes of a task (up to +14.3%).
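The abstract does not spell out how training proceeds when a sample lacks labels for some tasks. A minimal sketch of one common way to handle such incomplete samples (an assumption for illustration, not the paper's actual implementation) is to compute a per-task cross-entropy loss over only the labelled samples in a batch, with missing labels marked by a sentinel value, and then average over the tasks:

```python
import numpy as np

def masked_multitask_loss(logits_per_task, labels_per_task, ignore_index=-1):
    """Average cross-entropy over tasks, skipping samples whose label
    for a given task is missing (marked by ignore_index)."""
    task_losses = []
    for logits, labels in zip(logits_per_task, labels_per_task):
        mask = labels != ignore_index
        if not mask.any():
            continue  # no labelled samples for this task in the batch
        z = logits[mask]
        y = labels[mask]
        # numerically stable log-softmax
        z = z - z.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        # negative log-likelihood of the true class, averaged over samples
        task_losses.append(-log_probs[np.arange(len(y)), y].mean())
    return float(np.mean(task_losses))

# Two tasks, two samples; the second sample has no label for task 2.
loss = masked_multitask_loss(
    [np.zeros((2, 3)), np.zeros((2, 4))],          # logits: 3-class and 4-class task
    [np.array([0, 1]), np.array([2, -1])])          # -1 marks a missing label
```

With uniform (all-zero) logits this returns the mean of log 3 and log 4, confirming that the unlabelled sample contributes nothing to the second task's loss. Class-imbalance countermeasures such as the hard-example weighting mentioned in the abstract could be layered on top by reweighting the per-sample terms before averaging.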