Learning When to Use Adaptive Adversarial Image Perturbations Against Autonomous Vehicles
Deep neural network (DNN) models for object detection from camera images are widely adopted in autonomous vehicles. However, DNN models have been shown to be susceptible to adversarial image perturbations. Existing methods for generating adversarial image perturbations treat each incoming image frame as the decision variable of a separate optimization. Therefore, given a new image, the typically computationally expensive optimization must start over, since nothing is learned across the independent optimizations. Very few approaches have been developed for attacking online image streams while accounting for the underlying physical dynamics of autonomous vehicles, their mission, and the environment. We propose a multi-level stochastic optimization framework that monitors the attacker's capability of generating adversarial perturbations. Based on this capability level, a binary attack/no-attack decision is introduced to enhance the attacker's effectiveness. We evaluate the proposed multi-level image attack framework in simulations of vision-guided autonomous vehicles and in physical tests with a small indoor drone in an office environment. The results show that our method can generate the image attack in real time while monitoring, given state estimates, when the attacker is proficient.
Choosing when to use the adversarial image perturbation.
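The following is a minimal sketch, not the authors' implementation, of two ideas from the abstract: warm-starting the perturbation across incoming frames instead of restarting the optimization for each image, and a binary attack/no-attack gate driven by a running estimate of the attacker's effectiveness. The ToyDetector, the exponential-moving-average capability estimate, and the gate threshold are illustrative assumptions, not components of the paper.

```python
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Stand-in for a DNN object detector; outputs a single confidence score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid()
        )
    def forward(self, x):
        return self.net(x)

class OnlineAttacker:
    """Keeps one perturbation across frames and gates whether to apply it."""
    def __init__(self, detector, eps=8 / 255, lr=1e-2, gate_threshold=0.3, ema=0.9):
        self.detector = detector
        self.eps = eps                       # L-infinity perturbation budget (assumed)
        self.lr = lr
        self.gate_threshold = gate_threshold # assumed capability threshold
        self.ema = ema
        self.delta = None                    # perturbation, warm-started across frames
        self.capability = 0.0                # running drop in detection confidence

    def step(self, frame):
        if self.delta is None:
            self.delta = torch.zeros_like(frame, requires_grad=True)
        # One gradient step on the current frame, starting from the previous
        # frame's perturbation instead of re-optimizing from scratch.
        score_adv = self.detector((frame + self.delta).clamp(0, 1))
        loss = score_adv.mean()              # attacker wants low detector confidence
        loss.backward()
        with torch.no_grad():
            self.delta -= self.lr * self.delta.grad.sign()
            self.delta.clamp_(-self.eps, self.eps)
            self.delta.grad.zero_()
            # Update the running capability estimate: how much the current
            # perturbation suppresses the detector relative to the clean frame.
            score_clean = self.detector(frame)
            drop = (score_clean - score_adv).mean().item()
            self.capability = self.ema * self.capability + (1 - self.ema) * drop
        # Binary decision: inject the perturbation only when it is effective.
        attack = self.capability > self.gate_threshold
        out = (frame + self.delta.detach()).clamp(0, 1) if attack else frame
        return out, attack

detector = ToyDetector().eval()
attacker = OnlineAttacker(detector)
for _ in range(5):                           # simulated online image stream
    frame = torch.rand(1, 3, 64, 64)
    perturbed, attacked = attacker.step(frame)
```

In this sketch the gate plays the role of the "when to attack" decision: the perturbation is always updated in the background, but it is only injected into the stream once its measured effect on the (placeholder) detector exceeds the assumed threshold.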
Relevant Paper:
Yoon, Hyung-Jin, Hamidreza Jafarnejadsani, and Petros Voulgaris. “Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles.” IEEE Robotics and Automation Letters (2023).