Mapcooper: A Communication-Efficient Collaborative Perception Framework via Map Alignment
Keywords: V2X, Cooperative perception, HD map, Autonomous driving
Abstract. V2I collaborative perception improves awareness of the dynamic driving environment by exchanging multi-viewpoint information through communication, establishing itself as a key element of intelligent transportation systems. Despite its advantages, this paradigm faces a trade-off between communication bandwidth and perception performance. To address this challenge, we propose a map-mask designed to align with perceptual spatial features, enabling precise background filtering that isolates critical areas for communication. During the sender’s compression phase, the map-mask filters out background elements and extracts key features from critical areas, significantly reducing communication bandwidth consumption. During the receiver’s decompression phase, the map-mask restores scene context and enhances spatial information surrounding critical areas, preserving perception performance. Building on this map alignment, we develop Mapcooper, a unified collaborative perception framework that optimizes the balance between communication bandwidth and perception performance. We evaluate Mapcooper through extensive experiments on the large-scale V2X-Seq-SPD dataset. The results demonstrate that Mapcooper outperforms existing collaborative perception approaches in perceptual accuracy while minimizing communication transmission costs.
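The sender/receiver mechanism described in the abstract can be sketched as follows. This is a minimal illustration of map-mask-based feature filtering, not the paper's implementation: the BEV feature shapes, the use of a boolean grid mask, and the constant-fill background restoration are all simplifying assumptions made here for clarity.

```python
import numpy as np

def sender_compress(bev_feat, map_mask):
    """Keep only features at map-critical cells; return a sparse payload.

    bev_feat: (C, H, W) float array of BEV perception features (assumed layout)
    map_mask: (H, W) boolean array, True for critical (e.g. road) cells
    """
    ys, xs = np.nonzero(map_mask)        # grid indices of critical cells
    payload = bev_feat[:, ys, xs]        # (C, N) features actually transmitted
    return payload, ys, xs

def receiver_decompress(payload, ys, xs, grid_shape, fill=0.0):
    """Scatter received features back onto a dense BEV grid.

    Background cells are filled with a constant here; the paper's map-mask
    additionally restores scene context around critical areas, which this
    sketch omits.
    """
    c = payload.shape[0]
    bev = np.full((c, *grid_shape), fill, dtype=payload.dtype)
    bev[:, ys, xs] = payload
    return bev

# Toy example: 8x8 BEV grid, 4 channels, roughly 25% of cells critical
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8)).astype(np.float32)
mask = rng.random((8, 8)) < 0.25
payload, ys, xs = sender_compress(feat, mask)
restored = receiver_decompress(payload, ys, xs, (8, 8))
print(f"transmitted fraction of features: {payload.size / feat.size:.2f}")
```

The transmitted fraction equals the fraction of mask-critical cells, which is the source of the bandwidth savings: background features never leave the sender.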