Multi-source driving signals
V2Xverse provides synchronous driving-related signals from vehicles and road-side units across various urban scenarios, and enables communication among agents.
Vehicle-to-everything-aided autonomous driving (V2X-AD) has huge potential to provide safer driving solutions. Despite extensive research in transportation and communication to support V2X-AD, the actual utilization of these infrastructures and communication resources to enhance driving performance remains largely unexplored. This highlights the necessity of collaborative autonomous driving: a machine learning approach that optimizes the information-sharing strategy to improve the driving performance of each vehicle. This effort requires two key foundations: a platform capable of generating data to facilitate the training and testing of V2X-AD, and a comprehensive system that integrates full driving-related functionalities with mechanisms for information sharing. From the platform perspective, we present V2Xverse, a comprehensive simulation platform for collaborative autonomous driving. This platform provides a complete pipeline for collaborative driving: a multi-agent driving dataset generation scheme, a codebase for deploying a full-stack collaborative driving system, and closed-loop driving performance evaluation with scenario customization. From the system perspective, we introduce CoDriving, a novel end-to-end collaborative driving system that integrates V2X communication across the entire autonomous driving pipeline, promoting driving with shared perceptual information. The core idea is a novel driving-oriented communication strategy: selectively complementing the driving-critical regions in the single-agent view with sparse yet informative perceptual cues. Leveraging this strategy, CoDriving improves driving performance while optimizing communication efficiency. We conduct comprehensive benchmarks with V2Xverse, analyzing both modular performance and closed-loop driving performance. Experimental results show that CoDriving: i) significantly improves the driving score by 62.49% and drastically reduces the pedestrian collision rate by 53.50% compared with the SOTA end-to-end driving method, and ii) sustains its driving performance superiority under dynamic, constrained communication conditions.
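To make the communication strategy concrete, below is a minimal sketch of selecting and fusing sparse, driving-critical BEV features under a communication budget. The tensor layout, the request map, and the keep_ratio budget are illustrative assumptions, not the exact CoDriving implementation.

import torch

def select_sparse_features(bev_feat: torch.Tensor,
                           request_map: torch.Tensor,
                           keep_ratio: float = 0.1):
    """Keep only driving-critical BEV cells as messages to transmit.

    bev_feat:    (C, H, W) perceptual features of one agent.
    request_map: (H, W) scores marking driving-critical regions.
    """
    C, H, W = bev_feat.shape
    k = max(1, int(keep_ratio * H * W))           # communication budget
    topk = request_map.flatten().topk(k).indices  # most critical cells
    feats = bev_feat.flatten(1)[:, topk].T        # (k, C) sparse messages
    return topk, feats

def fuse_received(ego_feat: torch.Tensor, indices, feats):
    """Complement the ego view with received sparse features (max fusion)."""
    C, H, W = ego_feat.shape
    fused = ego_feat.flatten(1).clone()
    fused[:, indices] = torch.maximum(fused[:, indices], feats.T)
    return fused.view(C, H, W)

Sending only the top-k cells rather than the full feature map is what trades driving performance against communication cost; the budget keep_ratio is a knob assumed here for illustration.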
V2Xverse simulates the complete V2X-AD driving pipeline, incorporating various driving functionalities and delivering extensive driving annotations. It supports both offline benchmark generation and online closed-loop driving performance evaluation.
V2Xverse provides driving evaluation in CARLA Town05, covering 67 test routes and hundreds of scenario trigger points. The ego vehicle triggers a specific scenario (e.g., pedestrians or vehicles suddenly appearing from behind obstacles) when approaching a trigger point.
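As an illustration, a trigger point can be realized as a simple proximity check that spawns the scenario actor once the ego vehicle comes within range. This is a hypothetical sketch in the spirit of V2Xverse's trigger points; only the carla API calls are real, the surrounding logic and the 15 m radius are assumptions.

import carla

TRIGGER_RADIUS = 15.0  # meters; assumed activation distance

def maybe_trigger(world, ego, trigger_location, fired):
    """Spawn a jaywalking pedestrian once the ego nears the trigger point."""
    if fired or ego.get_location().distance(trigger_location) > TRIGGER_RADIUS:
        return fired
    bp = world.get_blueprint_library().filter('walker.pedestrian.*')[0]
    spawn = carla.Transform(trigger_location + carla.Location(y=3.0, z=1.0))
    walker = world.try_spawn_actor(bp, spawn)  # may fail if the spot is occupied
    if walker is not None:
        # Send the pedestrian across the ego lane, ignoring traffic rules.
        walker.apply_control(carla.WalkerControl(
            direction=carla.Vector3D(0.0, -1.0, 0.0), speed=2.0))
    return walker is not None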
CoDriving comprises two components: end-to-end single-agent autonomous driving, which transforms sensor inputs into driving actions, and driving-oriented collaboration, which enhances single-agent features by aggregating the driving-critical perceptual features shared through communication. The benefits propagate from the perception module to the entire driving pipeline, enhancing all driving signals.
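The two-component design can be summarized schematically as follows; module names and shapes are placeholders for illustration, not the released code.

import torch
import torch.nn as nn

class CoDrivingSketch(nn.Module):
    """Two-component sketch: single-agent driving + collaboration."""
    def __init__(self, c=64, n_waypoints=10):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.encoder = nn.Conv2d(3, c, 3, padding=1)  # stand-in BEV encoder
        self.decoder = nn.Linear(c, n_waypoints * 2)  # stand-in waypoint head

    def forward(self, ego_bev, received_feats):
        # 1) End-to-end single-agent driving: sensor input -> features.
        feat = self.encoder(ego_bev)                  # (B, C, H, W)
        # 2) Driving-oriented collaboration: complement ego features with
        #    driving-critical features shared by other agents (max fusion).
        for other in received_feats:                  # same (B, C, H, W) grid
            feat = torch.maximum(feat, other)
        # The enhanced features benefit the whole downstream pipeline;
        # here they directly feed waypoint prediction.
        pooled = feat.mean(dim=(2, 3))                # (B, C)
        return self.decoder(pooled).view(-1, self.n_waypoints, 2)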
A pedestrian ahead is invisible to the ego vehicle due to occlusion by two vehicles. Compared with single-agent autonomous driving, CoDriving avoids a catastrophic collision by utilizing the visual information shared by the road-side unit.
CoDriving adapts to complex traffic conditions in urban navigation tasks. We employ V2Xverse to simulate safety-critical scenarios. For example, we switch traffic lights to green to encourage traffic dynamics, and make the scenes even more challenging by introducing "crazy" pedestrians who disregard traffic rules.
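The traffic-light manipulation can be done directly through the CARLA Python API, as in this minimal sketch; the "crazy" pedestrians can then be spawned as in the trigger-point sketch above.

import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

# Force every traffic light to green to encourage traffic dynamics.
for tl in world.get_actors().filter('traffic.traffic_light*'):
    tl.set_state(carla.TrafficLightState.Green)
    tl.freeze(True)  # keep it green instead of cycling back to red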
@article{liu2024codriving,
title={Towards Collaborative Autonomous Driving: Simulation Platform and End-to-End System},
author={Liu, Genjia and Hu, Yue and Xu, Chenxin and Mao, Weibo and Ge, Junhao and Huang, Zhengxiang and Lu, Yifan and Xu, Yinda and Xia, Junkai and Wang, Yafei and others},
journal={arXiv preprint arXiv:2404.09496},
year={2024}
}