Operational Feasibility of Adversarial Attacks Against Artificial Intelligence
Overview
A large body of academic literature describes myriad attack vectors and suggests that most of the U.S. Department of Defense’s (DoD’s) artificial intelligence (AI) systems are in constant peril. However, RAND researchers investigated adversarial attacks designed to hide objects (causing algorithmic false negatives) and found that many such attacks are operationally infeasible to design and deploy because of high knowledge requirements and impractical attack vectors. As the researchers discuss in this report, tried-and-true nonadversarial techniques can be less expensive, more practical, and often more effective. Thus, adversarial attacks against AI pose less risk to DoD applications than academic research currently implies. Even so, well-designed AI systems and complementary mitigation strategies can further reduce the risk of such attacks.
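The report itself contains no code, but the attack class it evaluates can be made concrete with a short sketch. The following is a minimal, hypothetical example of a gradient-based (FGSM-style) perturbation that pushes a detector's "object present" score down to induce a false negative. The toy model, random input, and epsilon budget are illustrative stand-ins, not the study's experimental setup.

```python
# Minimal sketch of an "object-hiding" (false-negative) adversarial attack.
# Hypothetical setup: a stand-in scoring head whose output logit is treated
# as the detector's "object present" score.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a detector's scoring head: image -> single detection logit.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
model.eval()

image = torch.rand(1, 3, 32, 32)   # benign input
image.requires_grad_(True)

logit = model(image)               # higher logit => object detected
logit.sum().backward()             # gradient of the score w.r.t. the pixels

epsilon = 8 / 255                  # perturbation budget (assumed)
# Step *against* the gradient to suppress the detection score.
adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("benign logit:     ", model(image).item())
    print("adversarial logit:", model(adversarial).item())
```

Note that even this toy sketch assumes white-box access to model gradients, which is exactly the kind of knowledge requirement the researchers identify as operationally unrealistic for many adversaries.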
Key Findings
- Adversarial attacks designed to hide objects from AI pose less risk to DoD applications than academic research currently implies.
- In the real world, such adversarial attacks are difficult to design and deploy because of high knowledge requirements and infeasible attack vectors; there are often less expensive, more practical, and more effective nonadversarial techniques available.
- Fusing data and predictions across sensor modalities, signal-sampling rates, and image resolutions can further mitigate the risk of adversarial attacks against AI.
Recommendations
- DoD should assess how vulnerable its AI models are to adversarial attacks by considering how adversaries can feasibly influence those models, how leaks of knowledge about a model affect attack efficacy, and the costs an adversary would incur to mount an attack.
- DoD should maintain situational awareness of state-of-the-art academic techniques for attacking AI in real-world scenarios and understand how those techniques could feasibly affect concepts of operation for both itself and its adversaries.
- DoD should develop robust AI models, preprocessing techniques, and proper data-fusion systems to substantially increase the resources an adversary must expend to perform an attack (a minimal fusion sketch follows this list).
- DoD should invest in responsive support teams for AI systems to quickly detect, identify, and respond to adversarial threats.
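The report does not publish a reference implementation of the fusion recommendation above; the following is a minimal sketch, assuming hypothetical per-sensor detectors that each emit a detection probability. The sensor names, scores, and threshold are illustrative assumptions.

```python
# Minimal sketch of prediction-level fusion across sensor modalities:
# fuse independent per-sensor detection scores so a perturbation crafted
# against one modality cannot, by itself, hide an object.
from statistics import mean

def fuse_detections(scores: dict[str, float], threshold: float = 0.5) -> bool:
    """Declare a detection if either the mean score or a majority of
    per-sensor votes clears the threshold."""
    votes = sum(score >= threshold for score in scores.values())
    majority = votes > len(scores) / 2
    return mean(scores.values()) >= threshold or majority

# Hypothetical scenario: an adversarial patch suppresses the visible-band
# detector, but the infrared and radar chains are unaffected.
scores = {"visible": 0.05, "infrared": 0.91, "radar": 0.84}
print(fuse_detections(scores))  # True: the object is still detected
```

The design point is that adversarial perturbations are typically crafted against a single sensing chain; requiring agreement (or averaging) across independent modalities raises the adversary's task from fooling one sensor to fooling all of them.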
This research was sponsored by the Office of the Secretary of Defense and conducted within the Acquisition and Technology Policy Center of the RAND National Security Research Division (NSRD).