ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume IV-3/W1
https://doi.org/10.5194/isprs-annals-IV-3-W1-1-2019
01 Mar 2019

PIXEL-BASED AND OBJECT-BASED TERRACE EXTRACTION USING FEED-FORWARD DEEP NEURAL NETWORK

H. T. Do, V. Raghavan, and G. Yonezawa

Keywords: Terrace field, feed-forward, deep learning, remote sensing, object-based

Abstract. In this paper, we identify terrace fields using a feed-forward back-propagation deep neural network in a pixel-based approach and in several cases of an object-based approach. Terrace fields in the Lao Cai area of Vietnam are identified from a 5-meter RapidEye image. The image includes five bands: red, green, blue, red edge and near-infrared. The reference data are a set of terrace and non-terrace points randomly selected from a reference map. The reference data are separated into three sets: a training set for the training process, a validation set for tuning the optimal parameters of the deep neural network model, and a test set for assessing classification accuracy. Six optimal thresholds (T), 0.06, 0.09, 0.12, 0.14, 0.2 and 0.22, are chosen from the Rate of Change graph and then used to generate six cases of object-based classification. The deep neural network (DNN) model is built with 8 hidden layers; the input units are the 5 RapidEye bands, and the output is the terrace and non-terrace classes. Each hidden layer includes 256 units, a large number chosen to avoid under-fitting. The activation function is the rectifier. Dropout and two regularization parameters are applied to avoid overfitting. Seven terrace maps are generated. The classification results show that the DNN identifies terrace fields effectively in both the pixel-based and object-based approaches. Pixel-based classification is the most accurate approach, achieving 90% accuracy. The accuracies of the object-based approaches are 88.5%, 87.3%, 86.7%, 86.6%, 85% and 85.3%, corresponding to the six segmentation thresholds.
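As a concrete illustration of the architecture described in the abstract, the sketch below builds the network in Keras. This is an assumption for illustration only: the abstract does not name the framework, the dropout rate, or the values of the two regularization parameters (assumed here to be L1 and L2 weight penalties), so those numbers are placeholders. It does specify 5 spectral inputs, 8 hidden layers of 256 rectifier units, dropout plus two regularization terms, and a terrace / non-terrace output.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_terrace_dnn(n_bands=5, n_hidden=8, units=256,
                      dropout_rate=0.2, l1=1e-5, l2=1e-5):
    """Feed-forward DNN sketch: 5 RapidEye bands -> 8 x 256 ReLU -> sigmoid."""
    model = tf.keras.Sequential()
    model.add(layers.InputLayer(input_shape=(n_bands,)))
    for _ in range(n_hidden):
        # Rectifier activation per the abstract; the L1/L2 penalties and
        # dropout rate are illustrative stand-ins for the paper's
        # unreported regularization values.
        model.add(layers.Dense(
            units, activation="relu",
            kernel_regularizer=regularizers.l1_l2(l1=l1, l2=l2)))
        model.add(layers.Dropout(dropout_rate))
    # Single sigmoid unit: probability that a pixel (or object mean
    # spectrum, in the object-based cases) belongs to the terrace class.
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

In the pixel-based case each sample is one pixel's five band values; in the object-based cases the same network would be fed per-segment band statistics derived from the six segmentation thresholds.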