Assessing Terrorism and Artificial Intelligence in 2050
William D. Harris is a U.S. Army Special Forces Officer with six deployments for operations in Iraq and Syria and experience working in Jordan, Turkey, Saudi Arabia, Qatar, Israel, and other regional states. He has commanded from the platoon to battalion level and served in assignments with 1st Special Forces Command, 5th Special Forces Group, 101st Airborne Division, Special Operations Command-Central, and 3rd Armored Cavalry Regiment. William holds a Bachelor of Science from the United States Military Academy, a Master of Arts from Georgetown University’s Security Studies Program, a master’s degree from the Command and General Staff College, and a master’s degree from the School of Advanced Military Studies. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.
Title: Assessing Terrorism and Artificial Intelligence in 2050
Date Originally Written: December 14, 2022.
Author and / or Article Point of View: The author is an active-duty military member who believes that terrorists will pose increasing threats in the future as technology enables their operations.
Summary: The proliferation of artificial intelligence (AI) will enable terrorists in at least three ways. First, they will be able to overcome their current manpower limitations by producing propaganda at scale to increase recruitment. Second, they will be able to use AI to improve target reconnaissance. Third, terrorists can use AI to improve their attacks, including with advanced unmanned systems and biological weapons.
Text: Recent writing about the security implications of artificial intelligence (AI) has focused on the feasibility of a state like China, or others with totalitarian aspirations, building a modern panopticon that combines ubiquitous surveillance with massive AI-driven data processing and pattern recognition[1]. For years, other lines of research have analyzed the application of AI to fast-paced conventional warfare. Less attention has focused on how AI could help the sub-state actor: the criminal, insurgent, or terrorist. Nevertheless, history shows that new technologies have never given their users an enduring and decisive edge. Either the technology proliferates or combatants find countermeasures. Consequently, understanding how AI could enable terrorists is a first step in preventing future attacks.
The proliferation of AI has the potential to enable terrorists much as the proliferation of man-portable weapons and encrypted communications has enabled them to become more lethal[2]. Terrorists, or other sub-state entrepreneurs of violence, may be able to employ AI to solve operational problems. This preliminary analysis examines three ways that violent underground groups could use AI in the coming decades: recruitment, reconnaissance, and attack.
The advent of mass media allowed the spread of radical ideological tracts at a pace that produced regional and then global waves of violence. In 1848, revolutionary movements threatened most of the states of Europe. Half a century later, a global yet diffuse anarchist movement led to the assassination of five heads of state and the beginning of World War I[3]. Global revolutionary movements during the Cold War and then the global Islamist insurgency against the modern world further capitalized on the increasing bandwidth, range, and volume of communication[4]. The sleek magazines and videos of the Islamic State are the latest iteration of terrorists’ use of modern communications to craft and distribute a message intended to find and radicalize recruits. If they employ advanced AI, terrorist organizations will be able to increase the production rate of quality materials in multiple languages, far beyond what their limited manpower currently allows. Recent advances in AI, most notably OpenAI’s ChatGPT, demonstrate that AI systems are already capable of producing quality material. These materials will become increasingly sophisticated and nuanced in ways that resonate with vulnerable individuals, leading to increased radicalization and recruitment[5].
Once a terrorist organization has recruited a cadre of fighters, it can begin planning and executing a terrorist attack, a key phase of which is reconnaissance. AI could be an important tool here, enabling increased collection and analysis of data to find patterns of life and security vulnerabilities. Distributed AI would allow terrorists conducting reconnaissance to collect and process vast quantities of information rather than relying on purely physical surveillance[6]. This use of AI will speed up open-source intelligence collection and analysis, enabling the organization to identify the pattern of life of the employees of a targeted facility and to find gaps and vulnerabilities in its security. Open-source imagery and technical information could provide valuable sources for characterizing targets. AI could also drive open-architecture devices that enable terrorists to collect and exploit signals across the electromagnetic spectrum, and even sound waves[7]. In the hands of skilled users, AI will enable the collection and analysis of information that was previously unavailable, or available only to the most sophisticated state intelligence operations. Moreover, as the systems that run modern societies grow more complex, that complexity will create new, unanticipated failure modes, as the history of computer hacking and even the recent power grid attacks demonstrate[8].
After conducting the target reconnaissance, terrorists could employ AI-enabled systems to facilitate or execute the attack. The clearest example would be autonomous or semi-autonomous vehicles, which will pose increasing problems for facilities protection in the future. However, there are other ways that terrorists could employ AI to enable their attacks. One would be to use AI agents to identify how their operatives are vulnerable to facial recognition or other forms of pattern recognition. Forewarned, the groups could use AI to generate deception measures to mislead security forces. Using these AI-enabled disguises, the terrorists could conduct attacks with manned and unmanned teams, in which the unmanned teammates carry out the parts of the operation that are too distant, dangerous, difficult, or restricted for their human teammates. More frighteningly, recent successes in applying machine learning and AI to understand deoxyribonucleic acid (DNA) and proteins could be applied to create new biological and chemical weapons with increased lethality, transmissibility, or precision[9].
Not all terrorist organizations will develop the sophistication to employ advanced AI across all phases of their operations. However, AI will continue and accelerate the arms race between security forces and terrorists. Terrorists have applied most other human technologies in their effort to become more effective. They will be able to apply AI to accelerate their propaganda and recruitment; target selection and reconnaissance; evasion of facial recognition and pattern analysis; unmanned attacks against fortified targets; manned-unmanned teamed attacks; and advanced biological and chemical attacks.
One implication of this analysis is that the more distributed and accessible AI technology becomes, the more it will favor the terrorists. Unlike the centralized mainframes of early science fiction visions of AI, current trends point toward AI that is distributed and widely available. The more these technologies proliferate, the more defenders should be concerned.
The policy implication is that governments and security forces will continue their investments in technology to remain ahead of the terrorists. In the West, this imperative to exploit new technologies, including AI, will increasingly bring security forces into conflict with the need to protect individual liberties and maintain strict limits on the potential for governmental abuse of power. The balance in that debate between protecting liberty and protecting lives will have to evolve as terrorists grasp new technological powers.
Endnotes:
[1] For example, see “The AI-Surveillance Symbiosis in China: A Big Data China Event,” accessed December 16, 2022, https://www.csis.org/analysis/ai-surveillance-symbiosis-china-big-data-china-event; “China Uses AI Software to Improve Its Surveillance Capabilities,” Reuters, accessed December 16, 2022, https://www.reuters.com/world/china/china-uses-ai-software-improve-its-surveillance-capabilities-2022-04-08/.
[2] Andrew Krepinevich, “Get Ready for the Democratization of Destruction,” Foreign Policy, August 15, 2011, https://foreignpolicy.com/2011/08/15/get-ready-for-the-democratization-of-destruction/.
[3] Bruce Hoffman, Inside Terrorism, Columbia Studies in Terrorism and Irregular Warfare (New York: Columbia University Press, 2017).
[4] Ariel Victoria Lieberman, “Terrorism, the Internet, and Propaganda: A Deadly Combination,” Journal of National Security Law & Policy 9, no. 95 (April 2014): 95–124.
[5] See OpenAI’s ChatGPT, https://chat.openai.com/.
[6] “The ABCs of AI-Enabled Intelligence Analysis,” War on the Rocks, February 14, 2020, https://warontherocks.com/2020/02/the-abcs-of-ai-enabled-intelligence-analysis/.
[7] “Extracting Audio from Visual Information,” MIT News, Massachusetts Institute of Technology, accessed December 16, 2022, https://news.mit.edu/2014/algorithm-recovers-speech-from-vibrations-0804.
[8] Miranda Willson, “Attacks on Grid Infrastructure in 4 States Raise Alarm,” E&E News, December 9, 2022, https://www.eenews.net/articles/attacks-on-grid-infrastructure-in-4-states-raise-alarm/; Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations (Reading, Mass: Perseus Books, 1996).
[9] Michael Eisenstein, “Artificial Intelligence Powers Protein-Folding Predictions,” Nature 599, no. 7886 (November 23, 2021): 706–8, https://doi.org/10.1038/d41586-021-03499-y.