SHapley Additive exPlanations (SHAP) for Landslide Susceptibility Models: Shedding Light on Explainable AI
Keywords: Landslide susceptibility, SHAP, XAI, Machine learning, Risk mitigation
Abstract. This research examines the effectiveness of the SHapley Additive exPlanations (SHAP) approach in enhancing the interpretability of landslide susceptibility models. With the growing popularity of machine learning, we aim to understand how geoenvironmental and physically based factors influence model predictions and to explain their interactions. The study focuses on the landslide-prone region of Bhutan and compares the performance of two approaches: the first incorporates geoenvironmental factors only, while the second integrates geoenvironmental factors with an additional physically based model. The Random Forest (RF) algorithm is used to develop and compare these landslide susceptibility models, and evaluation metrics, including overall accuracy, precision, and recall, are employed to assess the predictive capability of each model. The findings reveal the strengths and limitations of both models, providing valuable insights for stakeholders and decision-makers involved in land-use planning and disaster preparedness. Ultimately, this research seeks to advance landslide susceptibility modelling by highlighting the role of SHAP in explaining the contributions and interactions of geoenvironmental and physically based factors, thereby contributing to more effective risk-mitigation strategies.
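The SHAP attribution described in the abstract rests on Shapley values from cooperative game theory: each factor's contribution to a prediction is its average marginal effect over all coalitions of the remaining factors. As an illustration only, and not the study's actual data or pipeline, the sketch below computes exact Shapley values by brute force for a Random Forest classifier trained on synthetic stand-in data; the feature set, sample sizes, and factor names in the comments are hypothetical placeholders for the geoenvironmental factors used in such a model.

```python
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: rows are terrain cells, columns are
# hypothetical factors (e.g. slope, rainfall, distance to road).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "landslide" label
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def value(coalition, x, background, model):
    """Expected predicted probability when features in `coalition` are
    fixed to x's values and the rest follow the background data."""
    Z = background.copy()
    Z[:, list(coalition)] = x[list(coalition)]
    return model.predict_proba(Z)[:, 1].mean()

def shapley_values(x, background, model):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = x.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,), x, background, model)
                               - value(S, x, background, model))
    return phi

x = X[0]                                   # one terrain cell to explain
phi = shapley_values(x, X[:100], rf)
base = rf.predict_proba(X[:100])[:, 1].mean()
# Efficiency property of Shapley values: base rate plus the summed
# per-factor contributions recovers the model's prediction for x.
print(base + phi.sum(), rf.predict_proba(x[None])[0, 1])
```

Brute-force enumeration scales as 2^n in the number of features, so it is only viable for toy cases like this; in practice the `shap` package's TreeExplainer computes the same quantities efficiently for tree ensembles such as RF, which is how SHAP is typically paired with susceptibility models.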