Introduction - What is a Cyber Arms Race?
The Cyber Arms Race can trace its roots to 1949, when the Soviet Union tested its first nuclear weapon. This development sparked the Cold War nuclear arms race: a competition between the United States and the Soviet Union to develop and stockpile nuclear weapons. The nuclear arms race, in turn, fueled a broader competition for technological superiority, including satellite and space technology. This technological race began with the Union of Soviet Socialist Republics' (USSR) launch of the Sputnik satellite in 1957 and led to the United States asserting dominance by becoming the first nation to land on the moon in 1969. Contemporaneously, the U.S. Navy was developing a series of computers called the Naval Tactical Data System for air battle management, while the U.S. Air Force was building an early warning and air defense system called the Semi-Automatic Ground Environment. Meanwhile, the Department of Defense was working with civilian partners such as IBM and Honeywell to develop the first transistors, integrated circuits, and universal automatic computers (UNIVAC) (Leese, 2023). This series of developments was the beginning of a computer arms race.
In this paper, I will discuss the United States' Cyber Arms Race with other nation-states, focusing on China. In particular, this paper will concentrate on the Cyber Arms Race in the realm of Artificial Intelligence (AI). I will discuss the rise of AI in warfare, explore its potential applications, and examine the risks associated with its unchecked and unregulated use. Lastly, I will argue that while the United States Department of Defense must continue to develop AI systems, it must also continuously educate and equip its users with the tools to understand and critically analyze those systems. Additionally, I will show that our nation must continue to reassess, reevaluate, and regulate the use of AI systems.
Applications of Artificial Intelligence on the Battlefield
The continuous integration and development of new cyber and AI tools are changing the battlefield. These tools have ripple effects across the multi-domain operational environment. In 2020, the Army conducted its first artillery strike with the assistance of an AI model that identified potential targets during an exercise with the 18th Airborne Corps (Manson, 2024). In November 2024, Scale AI released "Defense Llama," its first configured and fine-tuned large language model based on Meta's Llama 3 LLM, for use in multiple classified environments; it is already being used by combatant commands and other military groups (Vincent, 2024). It is widely known that machine learning algorithms such as Naive Bayes, Support Vector Machines, and Neural Networks can be used to detect anomalous behavior in endpoint detection and response systems for cyber defense. However, DeepLocker represents an entirely different threat: this malware uses a deep neural network model with "trigger conditions," such as facial and voice recognition, to execute an offensive payload (Osborne, 2019).
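To make the anomaly-detection point above concrete, the following is a minimal sketch of how a Naive Bayes classifier might flag suspicious endpoint events. It assumes the scikit-learn library; the features and training data are synthetic and invented purely for illustration, not drawn from any fielded defense system.

```python
# Minimal illustrative sketch: a Naive Bayes classifier flagging anomalous endpoint
# events. Assumes scikit-learn is installed; features and data are synthetic and
# invented for illustration, not drawn from any cited system.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row is one endpoint event: [processes spawned, outbound connections, MB written]
X_train = np.array([
    [2,  1,   5],    # benign
    [3,  2,   8],    # benign
    [1,  0,   2],    # benign
    [40, 25, 900],   # anomalous (e.g., mass file encryption)
    [35, 30, 750],   # anomalous
])
y_train = np.array([0, 0, 0, 1, 1])  # 0 = benign, 1 = anomalous

model = GaussianNB()
model.fit(X_train, y_train)

new_event = np.array([[38, 27, 820]])
print("anomalous" if model.predict(new_event)[0] == 1 else "benign")
```

In practice such classifiers are trained on far larger telemetry sets, but the workflow of learning a statistical boundary between benign and malicious behavior is the same.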
When considering the United States' adversaries, China's People's Liberation Army (PLA) has been making rapid advances in using AI for warfare. In May 2025, a report published by a university in northwest China showed that the PLA used DeepSeek's AI model to automatically generate military simulation scenarios designed to challenge how commanders make decisions in battle. According to the study, the AI simulation system could generate 10,000 warfare scenarios in just 48 seconds, a workload that would take human commanders 48 hours to plan. The official PLA newspaper commented, "The traditional principle of 'winning through tactics' will be replaced by 'winning through algorithms'" (Chen, 2025). At the Zhuhai Airshow in November 2024, Norinco, one of China's main defense manufacturers, revealed multiple combat systems with AI capabilities. One of the systems revealed was the AI-Enabled Synthetic Brigade, which combines advanced armored vehicles, swarming drones, loitering munitions, and electronic warfare tools into one unit (Graham & Singer, 2025). As the utilization of AI in warfare continues to expand globally, it is imperative to critically examine the potential dangers associated with its deployment.
The Risks of Using Artificial Intelligence on the Battlefield and Potential Mitigations
It is evident that nations are actively vying for a competitive advantage in the field of artificial intelligence. If pursued improperly and unethically, the quest for superior AI can have severe and perilous consequences. A 2024 RAND Corporation report showed that AI military systems could increase the risk of unintended conflict when human judgment is removed (Ansari, 2025). A significant concern regarding AI safety is black box AI, which characterizes most current systems. A black box AI system obscures its internal workings from the user, either by design or as an unintended consequence: users see the inputs and outputs but not the internal mechanisms that produce those outputs. This can lead to impressive, but sometimes unexplainable, results, while simultaneously hiding security vulnerabilities, biases, or privacy violations (Kosinski, 2024). The lack of transparency in an AI system usually stems from one of three sources: (1) opacity based on intentional concealment; (2) opacity due to technological illiteracy; and (3) opacity through cognitive mismatch.
For the U.S. military, an AI system could exhibit all three (Sullivan, 2024). AI systems used by the U.S. military can be trained on and used to generate classified materials; therefore, these systems must be intentionally concealed to some degree. The second source of opacity is unintentional; it stems from a lack of understanding of the fundamental mathematics and theory on which AI systems are built. The third source of opacity arises when an AI system is so complex, and reasons so differently from humans, that its internal processes cannot be meaningfully interpreted. The last two sources pose a major security threat when AI systems are employed in warfare.
In January 2025, a report published by the Cybersecurity and Infrastructure Security Agency (CISA), the Defense Advanced Research Projects Agency (DARPA), the Office of the Under Secretary of Defense for Research and Engineering (OUSD R&E), and the National Security Agency (NSA) identified the technological illiteracy problem as the software understanding gap. The report defined 'software understanding' as "the practice of constructing and assessing software-controlled systems to verify their functionality, safety, and security across all conditions (normal, abnormal, and hostile)." The software understanding gap refers to the general lack of knowledge and comprehension among mission owners and operators regarding the software they use, a consequence of technology manufacturers building software faster than users can understand it. This lack of understanding can damage national critical infrastructure and pose national security risks, particularly when mission owners or operators lack the technical capability to predict, prevent, or discover security flaws in the tools they are using. In 2018, the WannaCry ransomware halted chip production at the Taiwan Semiconductor Manufacturing Company for several days after a supplier introduced an infected tool (CISA et al., 2025), demonstrating that the software understanding gap exists across all industries.
It is critical to provide mission owners and operators with the training, education, and critical thinking skills needed to analyze their software and prevent inherent security issues. DARPA's initiation of programs focused on mathematically provable methods underscores the importance of educating the force on fundamental software knowledge (CISA et al., 2025). While testing may provide insight into how a system behaves in different scenarios and how the software functions internally, tests are limited to the testing environment and cannot cover every scenario. Mathematical proof and analysis offer the most reliable means of defining system behavior, and software that can be validated through mathematical proofs will be essential for AI-powered systems. This approach is particularly valuable for understanding AI because it begins with clearly defined fundamental rules and proceeds through a logical progression. Rather than a highly capable AI system with unpredictable behavior, it is more important to have an AI system we understand and know to be dependable (Rajan & Ring, 2025).
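The contrast between testing and mathematical analysis can be made concrete with a small sketch. The example below uses the z3-solver package, an assumption made only for illustration; the report and DARPA programs cited above do not prescribe any particular tool. A handful of spot tests on a naive 32-bit absolute-value routine would all pass, but a solver that reasons over every possible input finds the single case that violates the claimed property.

```python
# Illustrative sketch of analysis over testing, assuming the z3-solver package
# (a hypothetical choice, not a tool named in the cited report).
# Property under scrutiny: a naive 32-bit absolute-value routine never returns
# a negative number. Spot tests pass; the solver reasons over all 2**32 inputs.
from z3 import BitVec, If, Solver, sat

x = BitVec("x", 32)              # a symbolic 32-bit signed integer
naive_abs = If(x >= 0, x, -x)    # the routine's logic, modeled symbolically

s = Solver()
s.add(naive_abs < 0)             # ask: can the "absolute value" ever be negative?

if s.check() == sat:
    # Prints the bit pattern 0x80000000 (INT_MIN), whose negation overflows.
    print("Counterexample:", s.model()[x])
else:
    print("Property holds for every possible input")
```

A test suite that exercises only a few representative values would never surface this input; exhaustive mathematical reasoning does, which is the essence of the provable-methods argument.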
This is also the issue with AI systems that are opaque through cognitive mismatch. Today, the architecture responsible for the most significant AI advances is the deep learning Artificial Neural Network (ANN). Massive generative AI models, such as ChatGPT, use ANNs, which are difficult to understand for two reasons. First, their complexity: they are trained on large amounts of data and contain millions, occasionally billions, of parameters, so a single prediction can require millions of calculations that are virtually untraceable. Second, because ANNs fundamentally think and calculate differently than humans, verifying and understanding the processes behind their outputs is complicated, regardless of the skill or knowledge of the human (Sullivan, 2024). This dilemma poses two issues: trust in predictability and a lack of explanation.
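A rough back-of-the-envelope calculation illustrates the scale behind the "millions of calculations" point. The fully connected network below is hypothetical, and its layer sizes are invented solely for illustration; production language models are many orders of magnitude larger.

```python
# Back-of-the-envelope sketch: parameters and multiply-accumulate operations for
# one prediction in a small, hypothetical fully connected network. Layer sizes
# are invented for illustration; large language models are vastly bigger.
layer_sizes = [512, 1024, 1024, 256, 10]   # input -> hidden layers -> output

params = 0
macs = 0
for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
    params += fan_in * fan_out + fan_out   # weights plus biases
    macs += fan_in * fan_out               # one multiply-accumulate per weight

print(f"parameters: {params:,}")                         # roughly 1.8 million
print(f"multiply-accumulates per prediction: {macs:,}")  # roughly 1.8 million
```

Even this toy network performs well over a million arithmetic operations per output, which is why tracing an individual prediction by hand is not feasible.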
An AI system may be efficient and effective at generating a solution to a problem quickly. However, the process by which it arrived at that solution is often not understood. Therefore, mission owners, operators, and commanders must be wary of trusting its outputs; in critical scenarios, individuals must verify outputs by applying them and reasoning through hypothetical outcomes. In addition to trusting the model's outputs, one must also be able to explain its decisions. Explanation is important within international humanitarian law and the law of war: the legality of state (and sometimes individual) action frequently turns on the truthfulness and rigor of the offered explanation. If an AI model performs a physical action that is controversial or chaotic, it is imperative that the model's decision-making process be explainable by the user of that program. Without a comprehensive understanding of the underlying mechanisms, achieving this level of transparency becomes challenging. The inability to assess or understand an AI system's output will hinder soldiers' ability to trust and manage outcomes. The United States DoD must regulate and rigorously test the use of unpredictable AI systems at every echelon.
Conclusion
In summary, the evolution of the cyber arms race, particularly in the realm of artificial intelligence, necessitates recognizing both the potential and inherent risks of AI as it continues to revolutionize warfare. The United States must not only advance its AI capabilities, but also lead the charge in educating individuals and regulating the use of AI to ensure ethical and safe deployment. The complexities and opacity of AI systems, especially those used in military applications, require a comprehensive understanding and rigorous mathematical proof to prevent unintended and potentially catastrophic consequences. By prioritizing education and transparency, the U.S. can mitigate the risks associated with AI in warfare and foster a more secure and stable national and international environment.
References
Ansari, Junaid Hassan. 2025. “The AI Arms Race: Redefining Global Power and Security.” The Friday Times. April 9, 2025. https://thefridaytimes.com/09-Apr-2025/the-ai-arms-race-redefining-global-power-and-security.
Chen, Meredith. 2025. "Chinese Team Taps DeepSeek AI for Military Battle Simulation." South China Morning Post. May 16, 2025. https://www.scmp.com/news/china/military/article/3310707/chinese-team-taps-deepseek-ai-military-battle-simulation.
CISA, DARPA, OUSD R&E, and NSA. 2025. “Closing the Software Understanding Gap.” https://www.cisa.gov/sites/default/files/2025-01/joint-guidance-closing-the-software-understanding-gap-508c.pdf.
Erickson, Jon. 2023. “Killer Bots instead of Killer Robots: Updates to DoD Directive 3000.09 May Create Legal Implications.” Cyber Defense Review 8 (2): 1–13. https://cyberdefensereview.army.mil/Portals/6/Documents/2023_Summer/Erickson_CDR%20V8N2%20Summer%202023.pdf?ver=bIGK4_BcR8UvUwRAz69JUw%3d%3d.
Graham, Tye, and Peter W. Singer. 2025. "New Products Show China's Quest to Automate Battle." Defense One. March 2, 2025. https://www.defenseone.com/threats/2025/03/new-products-show-chinas-quest-automate-battle/403387/.
Kosinski, Matthew. 2024. “What Is Black Box Artificial Intelligence (AI)?” IBM. October 29, 2024. https://www.ibm.com/think/topics/black-box-ai.
Leese, Bryan. 2023. "Cold War Computer Arms Race." Journal of Advanced Military Studies 14, no. 2. Marine Corps University Press. https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/MCU-Journal/JAMS-vol-14-no-2/Cold-War-Computer-Arms-Race/.
Manson, Katrina. 2024. “AI Warfare Is Already Here.” Bloomberg.com, February 28, 2024. https://www.bloomberg.com/features/2024-ai-warfare-project-maven/.
Meacham, Sam. 2023. “A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous.” Harvard International Review. September 8, 2023. https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/.
Osborne, Charlie. 2019. "DeepLocker: When Malware Turns Artificial Intelligence into a Weapon." ZDNet. January 19, 2019. https://www.zdnet.com/article/deeplocker-when-malware-turns-artificial-intelligence-into-a-weapon/.
Rajan, Anjana, and Jonathon Ring. 2025. "The AI Arms Race Will Be Won on Mathematical Proof." Defense One. April 2025. https://www.defenseone.com/ideas/2025/04/ai-arms-race-will-be-won-mathematical-proof/404834/.
Sullivan, Scott. 2024. "Targeting in the Black Box: The Need to Reprioritize AI Explainability." Lieber Institute West Point. August 28, 2024. https://lieber.westpoint.edu/targeting-black-box-need-reprioritize-ai-explainability/.
Vincent, Brandi. 2024. “Scale AI Unveils ‘Defense Llama’ Large Language Model for National Security Users.” DefenseScoop. November 4, 2024. https://defensescoop.com/2024/11/04/scale-ai-unveils-defense-llama-large-language-model-llm-national-security-users/.