SMARTNav: A Visual-Inertial Dataset for Reliable Robotic State Estimation
Keywords: Visual-Inertial Odometry, SLAM, Dataset, Autonomous Navigation, LiDAR
Abstract. Visual-inertial navigation has become a cornerstone for deploying robots in diverse environments. Despite significant progress, current approaches can still fail to deliver reliable and robust navigation in industrial applications. Evaluating these methods on varied datasets under challenging operational conditions is therefore essential to ensure their safe integration into robotic platforms. To that end, this paper enriches the availability of navigation datasets by introducing SMARTNav, which comprises raw data from stereo cameras and IMU sensors mounted on both ground and aerial robots. These robots were deployed in various operational scenarios across different environments, such as greenhouses, urban streets, indoor spaces, and near-building areas. The data captures the challenges of navigating GPS-denied areas, repetitive structures, featureless environments, and adverse lighting conditions. To provide ground truth for each sequence, different techniques were employed, including a motion-capture system, Real-Time Kinematic (RTK) positioning, and dense LiDAR-based Simultaneous Localization and Mapping (SLAM). The resulting dataset can thus be used to address and validate key issues in vision-based state estimation, localization, and mapping for industrial applications. The SMARTNav dataset is accessible at: https://saxionmechatronics.github.io/smartnav-dataset/.
