Efficient Adversarial Scenario Test for Autonomous Vehicles
Author: SANG Ming, JIANG Zhengmin, LI Huiyun
Affiliation:

Funding:

This work was supported by the Shenzhen Basic Key Research Project (JCYJ20200109115414354, JCYJ20200109115403807) and the Foundation of Guangdong Province of China (2020B515130004, 2023A1515011813).

Ethical statement:

Abstract:

In autonomous driving safety research and application, limited testing mileage and exposure to only a narrow set of hazardous scenarios hinder improvements in safety performance. Testing with adversarial scenarios is therefore considered crucial. However, existing studies use generic optimization algorithms as their search frameworks, wasting computational resources when exploring the parameter space and thus achieving low efficiency. Moreover, under a constrained computational budget, these algorithms may fail to test a sufficient number of diverse failure samples, especially in complex environments. Adversarial scenario testing in complex environments faces three major challenges: scarce information, the sparse distribution of adversarial samples in a vast parameter space, and the difficulty of balancing exploration and exploitation during the search. To tackle these challenges, this paper proposes an efficient framework for adversarial scenario testing. The framework employs a surrogate model to gather more information about the parameter space, selects small batches of samples to cope with the sparsity of adversarial events in that space, and concentrates its search and model updates on unknown regions and adversarial samples, thereby balancing exploration and exploitation. Experimental results show that the proposed method achieves a search efficiency four times that of random sampling and more than twice that of a standard genetic algorithm. With a limited number of simulation runs, it generates more adversarial test cases that are likely to cause the tested autonomous driving system to fail. Notably, the proposed method identifies many outlier adversarial samples, revealing failure modes that existing algorithms fail to recognize. Furthermore, it can quickly and comprehensively identify the vulnerable scenarios of the algorithm under test, supporting the testing, validation, and iterative upgrading of autonomous driving algorithms.
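
The abstract describes the search framework only at a high level. As a rough illustration of the surrogate-assisted idea (a sketch, not the authors' implementation), the Python snippet below fits a Gaussian-process surrogate to previously simulated scenarios and uses an upper-confidence-bound acquisition to balance exploitation of predicted-critical regions with exploration of uncertain ones. The scenario parameters, their bounds, and the simulate() criticality score are assumptions made up for this example.

    # Illustrative surrogate-assisted adversarial scenario search (not the paper's implementation).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    # Hypothetical 3-parameter scenario space: lead-vehicle speed (m/s),
    # initial gap (m), braking deceleration (m/s^2).
    BOUNDS = np.array([[0.0, 30.0], [5.0, 60.0], [-4.0, 0.0]])

    def simulate(x):
        """Placeholder for one closed-loop simulation run of the system under test.
        Returns a criticality score where larger means closer to failure."""
        speed, gap, decel = x
        return speed * abs(decel) / (gap + 1e-6) + 0.1 * rng.standard_normal()

    def acquisition(mu, sigma, kappa=2.0):
        # Upper-confidence-bound score: exploit high predicted criticality,
        # explore where the surrogate is still uncertain.
        return mu + kappa * sigma

    # Small initial design, then iterate under a fixed simulation budget.
    X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(10, 3))
    y = np.array([simulate(x) for x in X])

    for _ in range(20):  # 20 additional simulations in total
        surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        candidates = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(2000, 3))
        mu, sigma = surrogate.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(acquisition(mu, sigma))]
        X = np.vstack([X, x_next])
        y = np.append(y, simulate(x_next))

    # Count scenarios that look adversarial under an illustrative threshold.
    threshold = np.quantile(y, 0.9)
    print("high-criticality scenarios found:", int((y > threshold).sum()))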

Citation

SANG Ming, JIANG Zhengmin, LI Huiyun. Efficient Adversarial Scenario Test for Autonomous Vehicles[J]. Journal of Integration Technology, 2024, 13(2): 15-28.

History
  • Received: July 26, 2023
  • Revised: July 26, 2023
  • Accepted: November 20, 2023
  • Online: November 20, 2023
  • Published: