ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-4-2022
https://doi.org/10.5194/isprs-annals-V-4-2022-91-2022
18 May 2022

LEARNING SOCIAL COMPLIANT MULTI-MODAL DISTRIBUTIONS OF HUMAN PATH IN CROWDS

X. Shi, H. Zhang, W. Yuan, D. Huang, Z. Guo, and R. Shibasaki

Keywords: Pedestrian Trajectory Prediction, Social Interactions, Multi-modal, LSTM, Deep Learning

Abstract. Long-term human path forecasting in crowds is critical for autonomous moving platforms (such as autonomous cars and social robots) to avoid collisions and produce high-quality plans. It is difficult for prediction systems to account for social interactions and to predict a distribution over possible future paths in highly interactive and dynamic circumstances. In this paper, we develop a data-driven model for long-term trajectory prediction that naturally takes social interactions into account through a spatio-temporal graph representation and predicts multiple modes of future trajectories. Unlike generative adversarial network (GAN) based models, which generate samples and then derive distributions from those samples, we use mixture density functions to describe human motion and directly map the distribution of future paths with explicit densities. To prevent the model from collapsing into a single mode and to truly capture the intrinsic multi-modality, we further use a Winner-Takes-All (WTA) loss instead of computing the loss over all modes. Extensive experiments on several trajectory prediction benchmarks demonstrate that our method captures the multi-modality of human motion and forecasts distributions of plausible futures in complex scenarios.
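
The combination of an explicit mixture output and a Winner-Takes-All objective described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: an assumed MDNHead module maps an LSTM hidden state to K Gaussian modes over a 2-D future position, and a wta_loss function computes the negative log-likelihood only for the best-fitting mode, so the remaining modes stay free to specialize on other plausible futures. All names and the choice of K are illustrative assumptions.

```python
# Minimal sketch (assumption: PyTorch; not the paper's code) of a
# mixture-density head with a Winner-Takes-All (WTA) loss for 2-D
# trajectory prediction. MDNHead, wta_loss, and K are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 5  # number of mixture modes (illustrative choice)

class MDNHead(nn.Module):
    """Maps an LSTM hidden state to K diagonal-Gaussian modes over (x, y)."""
    def __init__(self, hidden_dim: int, k: int = K):
        super().__init__()
        self.k = k
        self.pi = nn.Linear(hidden_dim, k)             # mixture weights
        self.mu = nn.Linear(hidden_dim, k * 2)         # 2-D means per mode
        self.log_sigma = nn.Linear(hidden_dim, k * 2)  # 2-D log std devs

    def forward(self, h):
        log_pi = F.log_softmax(self.pi(h), dim=-1)         # (B, K)
        mu = self.mu(h).view(-1, self.k, 2)                # (B, K, 2)
        sigma = self.log_sigma(h).view(-1, self.k, 2).exp()
        return log_pi, mu, sigma

def mode_log_prob(mu, sigma, target):
    """Per-mode Gaussian log-likelihood of the ground-truth position."""
    target = target.unsqueeze(1)                           # (B, 1, 2)
    dist = torch.distributions.Normal(mu, sigma)
    return dist.log_prob(target).sum(dim=-1)               # (B, K)

def wta_loss(log_pi, mu, sigma, target):
    """Winner-Takes-All: penalize only the best mode, instead of
    averaging the loss over all K modes, to avoid mode collapse."""
    log_p = mode_log_prob(mu, sigma, target)               # (B, K)
    best = log_p.argmax(dim=-1)                            # winner per sample
    nll = -log_p.gather(1, best.unsqueeze(1)).squeeze(1)   # (B,)
    # A small classification term teaches the weights to rank the winner.
    ce = F.nll_loss(log_pi, best)
    return nll.mean() + ce
```

In a full model, h would be the hidden state produced by the social (graph-structured) LSTM encoder at each prediction step; whether the winning mode is selected per time step or over whole trajectories is a design detail the abstract does not specify, and this sketch leaves it per step.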