MACHINE LEARNING FOR CLASSIFICATION OF AN ERODING SCARP SURFACE USING TERRESTRIAL PHOTOGRAMMETRY WITH NIR AND RGB IMAGERY
Keywords: Terrestrial Photogrammetry, Structure from Motion, Surface Classification, Machine Learning, High Mountain Environment
Abstract. Increasingly advanced and affordable close-range sensing techniques are employed by an ever-broadening range of users with varying competence and experience. In this context, a method was tested that uses photogrammetry and classification by machine learning to divide a point cloud into different surface-type classes. The study site is a peat scarp 20 metres long in the actively eroding river bank of the Rotmoos valley near Obergurgl, Austria. Imagery from near-infrared (NIR) and conventional (RGB) sensors, georeferenced with coordinates of targets surveyed with a total station, was used to create a point cloud using structure from motion and dense image matching. NIR and RGB information were merged into a single point cloud, and 18 geometric features were extracted at three different radii (0.02 m, 0.05 m and 0.1 m), yielding 58 variables in total on which to apply the machine learning classification. Segments representing six classes (dry grass, green grass, peat, rock, snow and target) were extracted from the point cloud and split into a training set and a testing set. A Random Forest model was trained using machine learning packages in the R-CRAN environment. The overall classification accuracy and Kappa Index were 98% and 97% respectively. The rock, snow and target classes had the highest producer's and user's accuracies. Dry and green grass had the highest omission errors (1.9% and 5.6% respectively) and commission errors (3.3% and 3.4% respectively). Analysis of feature importance revealed that the spectral descriptors (NIR, R, G, B) were by far the most important determinants, followed by verticality at the 0.1 m radius.
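The classification workflow described above (per-point spectral and geometric features, a train/test split, a Random Forest classifier, accuracy and Kappa assessment, and feature-importance analysis) can be sketched as follows. Note that the paper's models were trained with R-CRAN packages; this is a minimal Python/scikit-learn analogue on synthetic stand-in data, and all variable names, class counts and parameter values here are illustrative assumptions, not the authors' actual code or data.

```python
# Hedged sketch of the abstract's workflow in Python/scikit-learn
# (the original study used R-CRAN packages; data here are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
# 18 geometric features x 3 radii (0.02, 0.05, 0.1 m) + NIR, R, G, B = 58 variables
n_points, n_features = 600, 58
classes = ["dry_grass", "green_grass", "peat", "rock", "snow", "target"]

# Synthetic feature matrix standing in for the per-point descriptors.
X = rng.normal(size=(n_points, n_features))
y = rng.integers(0, len(classes), size=n_points)
# Make the four spectral columns informative so the toy model has signal,
# loosely mimicking the paper's finding that spectral bands dominate.
X[:, :4] += y[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print(f"overall accuracy: {accuracy_score(y_test, pred):.2f}")
print(f"kappa index:      {cohen_kappa_score(y_test, pred):.2f}")

# Feature-importance ranking, analogous to the paper's analysis that found
# the spectral descriptors first, then verticality at the 0.1 m radius.
top5 = np.argsort(clf.feature_importances_)[::-1][:5]
print("top-5 most important feature indices:", top5)
```

On this synthetic data the reported accuracy and Kappa will not match the paper's 98%/97%; the sketch only illustrates the pipeline shape, not the results.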