Verification framework uncovers safety lapses in open-source self-driving system


Using a newly developed verification framework, researchers have uncovered safety limitations in open-source self-driving systems during high-speed movements and sudden cut-ins, raising concerns for real-world deployments.
In this study, Research Assistant Professor Duong Dinh Tran of the Japan Advanced Institute of Science and Technology (JAIST), together with Associate Professor Takashi Tomita and Professor Toshiaki Aoki, also at JAIST, put the open-source autonomous driving system Autoware through a rigorous verification framework, revealing potential safety limitations in critical traffic situations.
To thoroughly assess how safe Autoware is, the researchers built a simulation-based testing system. This system, described in their study published in the journal IEEE Transactions on Reliability, acts as a digital proving ground for self-driving cars.
Using a language called AWSIM-Script, they could create simulations of various tricky traffic situations—real-world dangers that car safety experts in Japan have identified. During these simulations, a tool called Runtime Monitor kept a detailed record of everything that happened, much like the black box in an airplane.
Finally, another verification program, AW-Checker, analyzed these recordings to see if Autoware followed the rules of the road, as defined by the Japan Automobile Manufacturers Association (JAMA) safety standard. This standard provides a clear and structured way to evaluate the safety of autonomous driving systems (ADSs).
The researchers focused on three particularly dangerous and frequently encountered scenarios defined by the JAMA safety standard: cut-in (a vehicle abruptly moving into the ego vehicle's lane), cut-out (a vehicle ahead suddenly changing lanes), and deceleration (a vehicle ahead suddenly braking). They compared Autoware's performance against JAMA's "careful driver model," a benchmark representing the minimum expected safety level for ADSs.
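To give a flavor of what a careful-driver benchmark check can look like, here is a minimal sketch of a deceleration-scenario test. It is not the paper's actual AW-Checker logic: the reaction time, braking limit, and scenario fields below are illustrative assumptions, not the JAMA model's published parameters.

```python
from dataclasses import dataclass

# Hypothetical parameters in the spirit of a "careful driver" benchmark;
# the JAMA model's actual values are not reproduced here.
REACTION_TIME_S = 0.75   # assumed perception-reaction delay (s)
MAX_BRAKE_MS2 = 7.0      # assumed ego braking deceleration (m/s^2)

@dataclass
class DecelerationScenario:
    ego_speed: float    # ego velocity (m/s)
    lead_speed: float   # lead-vehicle velocity (m/s)
    gap: float          # initial bumper-to-bumper gap (m)
    lead_brake: float   # lead-vehicle deceleration (m/s^2)

def collision_avoidable(s: DecelerationScenario) -> bool:
    """Could a careful driver stop without hitting a suddenly braking lead?

    Worst case: the lead brakes to a standstill; the ego reacts after
    REACTION_TIME_S, then brakes at MAX_BRAKE_MS2. Compare how far each
    vehicle travels before stopping.
    """
    lead_stop_dist = s.lead_speed ** 2 / (2 * s.lead_brake)
    ego_travel = (s.ego_speed * REACTION_TIME_S
                  + s.ego_speed ** 2 / (2 * MAX_BRAKE_MS2))
    return ego_travel <= s.gap + lead_stop_dist

# A low-speed case with a generous gap is avoidable...
print(collision_avoidable(DecelerationScenario(10.0, 10.0, 30.0, 6.0)))  # True
# ...while a high-speed, short-gap case is not.
print(collision_avoidable(DecelerationScenario(30.0, 30.0, 10.0, 8.0)))  # False
```

A verdict like this, computed per simulated scenario, is the kind of pass/fail signal a checker can report against a minimum-safety benchmark.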
These experiments revealed that Autoware did not consistently meet the minimum safety requirements as defined by the careful driver model. As Dr. Tran explained, “Experiments conducted using our framework showed that Autoware was unable to consistently avoid collisions, especially during high-speed driving and sudden lateral movements by other vehicles, when compared to a competent and cautious driver model.”
One significant reason for these failures appeared to be errors in how Autoware predicted the movement of other vehicles. The system often predicted slow and gradual lane changes. However, when faced with vehicles making fast, aggressive lane changes (like in the cut-in scenario with high lateral velocity), Autoware’s predictions were inaccurate, leading to delayed braking and subsequent collisions in the simulations.
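The effect of underestimating lateral velocity can be sketched with a toy constant-velocity predictor. This is illustrative only: the half-lane offset and speeds below are assumed values, not Autoware's actual prediction module or parameters.

```python
# Toy constant-lateral-velocity predictor: if the predictor assumes a slow
# lane change, it expects the cut-in vehicle to enter the ego lane much
# later than it actually does, delaying the braking decision.
LANE_HALF_WIDTH_M = 1.75  # assumed lateral distance to the ego lane boundary

def time_to_lane_entry(lateral_offset: float, lateral_speed: float) -> float:
    """Seconds until a vehicle at the given offset crosses into the ego lane."""
    return lateral_offset / lateral_speed

actual = time_to_lane_entry(LANE_HALF_WIDTH_M, 1.5)     # aggressive cut-in
predicted = time_to_lane_entry(LANE_HALF_WIDTH_M, 0.5)  # predictor's slow guess

# The predictor expects the cut-in roughly 2.3 s later than it really
# occurs; that gap is the window in which braking is delayed.
print(f"actual {actual:.2f} s vs predicted {predicted:.2f} s")
```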
Interestingly, the study also compared the effectiveness of different sensor setups for Autoware. One setup used only lidar, while the other combined data from both lidar and cameras. Surprisingly, the lidar-only mode generally performed better in these challenging scenarios than the camera-lidar fusion mode. The researchers suggest that inaccuracies in the machine learning–based object detection of the camera system might have introduced noise, negatively impacting the fusion algorithm’s performance.
These findings have important real-world implications, as some customized versions of Autoware have already been deployed on public roads to provide autonomous driving services. “Our study highlights how a runtime verification framework can effectively assess real-world autonomous driving systems like Autoware.
“Doing so helps developers identify and correct potential issues both before and after the system is deployed, ultimately fostering the development of safer and more reliable autonomous driving solutions for public use,” noted Dr. Tran.
While this study provides valuable insights into Autoware’s performance in specific traffic disturbances on non-intersection roads, the researchers plan to expand their work to include more complex scenarios, such as those at intersections and involving pedestrians. They also aim to investigate the impact of environmental factors like weather and road conditions in future studies.
More information:
Duong Dinh Tran et al, Safety Analysis of Autonomous Driving Systems: A Simulation-Based Runtime Verification Approach, IEEE Transactions on Reliability (2025). DOI: 10.1109/TR.2025.3561455
Citation:
Verification framework uncovers safety lapses in open-source self-driving system (2025, May 23)
retrieved 23 May 2025
from https://techxplore.com/news/2025-05-verification-framework-uncovers-safety-lapses.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.