Graphic created using Adobe Firefly AI
The emergence of generative artificial intelligence (AI) marks a paradigm shift in military research and application, echoing the revolutionary scientific framework presented by Thomas Kuhn in his groundbreaking The Structure of Scientific Revolutions.1 This article delves into the profound implications and transformative potential of generative AI within the military sector, exploring its role as both a disruptive innovation and a catalyst for strategic advancement.2 In the evolving landscape of military technology, generative AI stands as a pivotal development, reshaping traditional methodologies and introducing new dimensions in strategy and tactics. Its ability to process vast amounts of data, generate predictive models, and aid in decision-making not only enhances operational efficiency but also presents unique challenges in terms of ethical deployment and integration into established military structures.
This article navigates the complex terrain of generative AI in military settings, examining its impact on policymaking, strategy formulation, and the broader implications for the principles of warfare. As we stand at the cusp of this technological revolution, this article underscores the need for a balanced approach that harmonizes technological prowess with ethical considerations, strategic foresight, and a deep understanding of the evolving nature of global security dynamics. We aim to provide a comprehensive overview of generative AI’s role in shaping the future of military strategy and its potential to redefine the contours of modern warfare.3
Definition of Generative Artificial Intelligence
Generative AI has become a focal point in modern culture with the popularization of applications such as ChatGPT, DALL-E, and Midjourney. Both industry and academia have adopted it in various innovative ways, adapting it to suit specific use cases. Its computational nature streamlines the search for code syntax and helps create computer programs. Within the humanities, it can readily generate written summaries of nuanced topics. Some applications can create images and even music. As an innovation, generative AI has “democratized access to Large Language Models” trained on the open-source internet; it specializes in producing “high quality, human-like material” for wide audiences.4 Before expanding upon the complex consequences of generative AI’s growing popularity, the terminology must be defined. Generative AI refers to models that produce more than just forecasts, data, or statistics. Its models are used for “developing fresh, human-like material that can be engaged with and consumed.”5
Generative AI is not a specific machine learning model but rather a collection of different types of models within data science. The most important differentiator is the output, which mimics human creativity and labor. Over the last couple of years, we have been fortunate to experience one of the rare moments in time classified as a scientific revolution as society began adapting to the changes associated with generative AI in industry.
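The distinction drawn above, models whose output imitates human-made material rather than producing forecasts or labels, can be illustrated with a deliberately tiny sketch: a character-level bigram model that learns transition frequencies from a corpus and then samples new text. This is a toy illustration only; modern generative AI relies on vastly larger neural networks, and the function names and corpus string here are invented for the example.

```python
import random

def train_bigram_model(corpus: str) -> dict:
    """Record, for each character, which characters follow it in the corpus."""
    model: dict = {}
    for current, following in zip(corpus, corpus[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model: dict, seed_char: str, length: int, rng: random.Random) -> str:
    """Sample a new sequence character by character from the learned transitions."""
    out = [seed_char]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: this character was never followed by anything
            break
        out.append(rng.choice(choices))
    return "".join(out)

corpus = "the model generates new material rather than labels"
model = train_bigram_model(corpus)
sample = generate(model, "t", 20, random.Random(0))
print(sample)  # a novel character sequence, not a prediction over fixed classes
```

The point of the sketch is the shape of the output: a discriminative model would return a label or a number, while even this crude generative model returns new material sampled from what it learned.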
Military Applications
In August 2023, the U.S. military announced “the establishment of a Generative Artificial Intelligence task force, an initiative that reflects the Department of Defense’s [DoD’s] commitment to harnessing the power of artificial intelligence in a responsible and strategic manner.”6 Task Force Lima, led by the Chief Digital and Artificial Intelligence Office (CDAO), has been tasked to assess and synchronize the use of AI across the DoD to safeguard national security. The management of training data sets is the primary current concern. In time, DoD aims to employ generative AI “to enhance its operations in areas such as warfighting, business affairs, health, readiness, and policy.”7 Due to the nature of military operations, the DoD has released risk mitigation guidance to ensure that responsible statistical practices are combined with quality data to produce insightful analytics and metrics.8 For any military application, officials must consider the principles of “governability, reliability, equity, accountability, traceability, privacy, lawfulness, empathy, and autonomy” to establish ethical implementation during this transitional period.9
Prospective applications of generative AI include “Intelligent Decision Support Systems (IDSSs) and Aided Target Recognition (AiTC), which assist in decision-making, target recognition, and casualty care in the field;” each of these aims to reduce the mental load of operators and increase the accuracy of decisions in dangerous environments.10 Historically, the U.S. military has implemented AI in “autonomous drone weapons/intelligent cruise missiles” and witnessed “robust results and reliable outcomes in complex and high-risk environments.”11 Although the AI in those weapon systems does not necessarily rely on generative AI models, it showcases a promising ability to follow the foundational ethical principles in American governance. Figure 1 illustrates DoD’s process of adopting AI into new warrior tasks. This system will replace previous practices to cultivate an improved data-driven military.12
Figure 1 — Military Adaptation Process of Generative AI
Future applications of generative AI include planning routes, writing operation orders, and formulating memorandums. Furthermore, the defense industry has been working on “3D Generative Adversarial Networks” capable of “analyzing and constructing 3D objects.”13 These models “become an increasingly important area to consider for the automation of design processes in the manufacturing and defense industry.”14 As the role of creating military goods changes over time, leaders must shift their focus toward thinking more deeply about problems and less about the labor process. They will need to develop critical-thinking skills that allow them to understand generative AI outputs based on their data inputs and to avoid the ethical concerns that stem from poor statistical practices. Many companies in the United States have already faced ethical dilemmas resulting from statistical models, ranging from fatal crashes involving self-driving cars to lawsuits over biased hiring algorithms.15 Current generative AI models may not be trained on military data sets and may poorly represent nuanced military policy. This does not necessarily mean military personnel must refrain from using these platforms, but there is a social burden to take appropriate precautions. The recent breakthroughs of generative AI in the public market will gradually reach a point where it can be used for military applications; however, it must first address:
…1) high risks means that military AI-systems need to be transparent to gain decision maker trust and to facilitate risk analysis; this is a challenge since many AI-techniques are black boxes that lack sufficient transparency, 2) military AI-systems need to be robust and reliable; this is a challenge since it has been shown that AI-techniques may be vulnerable to imperceptible manipulations of input data even without any knowledge about the AI-technique that is used, and 3) many AI-techniques are based on machine learning that requires large amounts of training data; this is [a] challenge since there is often a lack of sufficient data in military applications.16
The next era of military leaders must be aware of their new burden, and in time, officer education systems will shift to reflect these emerging roles.
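Of the three challenges quoted above, the robustness problem is the easiest to make concrete. The sketch below uses a hypothetical linear classifier in plain Python to show how a small perturbation aligned with the model's weights flips a classification while barely changing the input. The weights, bias, and "sensor reading" are invented for illustration, and the perturbation method is a simplified version of the gradient-sign attacks studied in the adversarial machine learning literature.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(w, b, x):
    """Toy linear classifier: class 1 if w.x + b > 0, else class 0."""
    return 1 if dot(w, x) + b > 0 else 0

def adversarial_perturbation(w, x, epsilon):
    """Gradient-sign-style step: nudge each feature by epsilon in whichever
    direction increases the score w.x, i.e. along sign(w)."""
    return [xi + epsilon * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

# A hypothetical "sensor reading" confidently classified as class 0 (benign).
w, b = [0.5, -0.3, 0.8], -0.2
x = [0.1, 0.4, 0.2]
assert classify(w, b, x) == 0

# A perturbation of 0.1 per feature, small relative to the measurement
# scale, is enough to flip the label to class 1 (threat).
x_adv = adversarial_perturbation(w, x, epsilon=0.1)
assert classify(w, b, x_adv) == 1
```

Real attacks target deep networks rather than a three-weight linear model, but the mechanism is the same: the input changes imperceptibly while the decision changes completely, which is exactly why the quoted guidance demands robustness before fielding.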
Generative Artificial Intelligence as a Disruptive Innovation
Generative AI can be classified as a disruptive innovation in accordance with the framework presented in Clayton Christensen’s The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. In his book, Christensen explains why great companies in established markets fail over time. The United States is the leading firm within the market of military power. Although this market is not monetarily based, every market experiences two types of technological change: sustaining and disruptive. Sustaining technology supports current market structures and is led by established firms seeking to satisfy current customers’ needs. Disruptive technology, however, disrupts and redefines a market’s preferences by finding strength in historically undeveloped characteristics. It is this process of shifting market dynamics that “consistently resulted in the failure of leading firms.”17 Established firms seek to develop new technology that appeals to their current market based on the existing value system.
History has witnessed the fluidity of battlefield technology (for example, the development of bows, rifles, machine guns, and tanks). Each of these advancements restructured warfare and, in some cases, upset the entire world order. For instance, consider Russia’s decline during the Industrial Revolution. A regional power in the 19th century, Russia failed to industrialize as quickly as Germany and could not organize a strong military industry by World War I. Ultimately, this failure to innovate led to heavy Russian losses on the Eastern Front against a technically superior, but much smaller, German army.18 Military value systems reflect what wins on the battlefield. Typically, leaders in established firms/countries overvalue historical approaches and fail to realize the potential of entrants (competing countries developing disruptive technology) in niche warfighting tasks until disruptive technology has advanced too far. Once disruptive technology redefines military value systems and operating procedures, it is too late for sustaining countries to catch up, and they are surpassed on the global stage.
Disruptive technology is dangerous to established firms because there is “considerable upward mobility into other networks” while the market “is restraining downward.”19 The essential idea here is that disruptive technology starts off marketing itself to customers with limited resources yet grows until it can steal bigger contracts. Large firms’ managers often have a difficult time justifying “a cogent case for entering small, poorly defined lower end markets that offer only lower profitability.”20 Within warfare, this is due to superpowers’ need to focus on the upmarket value networks, or rather, the connections and transactions between their territories and the current largest threats to national security. Imagine the President of the United States asking Congress in the mid-2010s to invest heavily in developing generative AI, a product that had no predictable application, rather than focusing on the war in Afghanistan. In hindsight, it would have been a great way to increase the American lead in military power, but until the Russo-Ukrainian War in 2022, perhaps no one could have envisioned the impact of AI in producing kill chains (the process of identifying targets, dispatching forces, attacking, and destroying those targets). This war has served as a catalyst for innovation, notably in autonomous drones that use satellite imagery and image-recognition software to identify hostiles.21 These drones communicate with larger servers and drop explosives on targets, vastly accelerating kill chains compared to historical operating procedures that required separately gathering intelligence, deploying forces, and fighting.22 The Chinese Communist Party has heavily invested in AI capabilities and aims to be the world leader by the mid-2030s, exemplifying America’s newfound military competition due to this disruptive technology.
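The kill chain just described (identify, dispatch, attack, destroy) is at bottom a sequential pipeline, and the acceleration the article attributes to AI comes from automating the handoff between phases. The sketch below is a hypothetical illustration of that pipeline structure only; all names are invented, and it models no real system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Phase(Enum):
    """The four kill-chain phases, in order."""
    IDENTIFIED = auto()
    DISPATCHED = auto()
    ENGAGED = auto()
    DESTROYED = auto()

@dataclass
class Target:
    name: str
    phase: Phase = Phase.IDENTIFIED

def advance(target: Target) -> Target:
    """Move a target one step along the kill chain; DESTROYED is terminal."""
    order = list(Phase)  # Enum preserves definition order
    idx = order.index(target.phase)
    if idx < len(order) - 1:
        target.phase = order[idx + 1]
    return target

# Historically each handoff (intelligence, deployment, engagement) was a
# separate human-driven process; automation collapses them into rapid steps.
t = Target("hypothetical-target-1")
for _ in range(3):
    advance(t)
assert t.phase is Phase.DESTROYED
```

The policy questions raised later in the article live precisely at these handoffs: which of the `advance` steps a human must approve, and on what evidence, is what the ethical-AI principles cited above are meant to govern.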
While disruptive entrants treat technology as a given and operating procedures as variable, sustainers see the opposite: operating procedures as fixed and technology as variable. To maintain success, established militaries abandon niche practices and focus on maintaining the status quo. Rational managers in established countries have neither the luxury of risk nor an apparent need for it. In time, the fluctuations of warfare create a cycle as countries uproot power structures, establish governance systems, and are eventually usurped by innovative conquerors. Remaining upmarket as a successful superpower requires established countries to adopt practices for managing disruptive change. Large militaries will have difficulty field testing emerging technology, so it is good practice to establish external research teams. These smaller organizations should not be expected to deliver great results; their key task must instead be to build the organizational knowledge on which future projects can stand. It is impossible to predict the fluidity of warfare, so militaries must actively stay on guard.
The establishment of Task Force Lima is a key example of the United States managing the disruptive nature of generative AI within the military market.23 Christensen recommends three main strategies for established firms to overcome disruptive change. One such strategy is pouring resources into new markets to make them more profitable, essentially influencing the growth rates of emerging markets. Companies may instead elect to wait until the emerging market is already defined and intervene as soon as an opportunity presents itself. Lastly, some companies may place all responsibility for commercializing disruptive technologies on small, outside organizations.24 DoD has been forced to utilize the latter option. A failure to manage AI within the military domain would result in a decline in power similar to the one Russia faced a century ago. The American military seeks to create new capabilities by utilizing small teams outside of existing processes and values to lead innovation, avoid security crises, and withstand changes in warfare.
Generative AI in Military Strategy
In the context of military policy and warfighting, the rise of generative AI significantly impacts the strategic and operational frameworks of defense organizations. The integration of this technology into military applications necessitates a nuanced approach to policymaking, blending scientific understanding with ethical and strategic insights from the humanities. C.P. Snow, renowned author of The Two Cultures, aimed to explain the historical divide between humanitarian and natural science studies in British society.
He stated that prior to the Industrial Revolution, the societal elite educated their youth through reading and writing to teach them the ways of governance, mostly through the subjects of philosophy, law, and English.25 The Industrial Revolution introduced another domain of study, the applied sciences, which gave the lower and middle classes a new route to improve their lives by harnessing the natural world. Snow’s general idea was that most people sought to improve their condition through the Industrial Revolution, which finally allowed the sciences to be applied to everyday life. Over time the working classes deepened their studies to benefit industrialization, while the elite remained focused on matters of literature and governance. The lasting split in academia between the two cultures was exacerbated in government through its lack of communication with industry.
The application of generative AI in military contexts, such as autonomous weapon systems and decision support tools, requires policies that balance technological capabilities with ethical considerations, including international humanitarian law and the rules of engagement. Governing bodies in America and internationally, such as the United Nations, have found it difficult to regulate advanced cyber operations. Now, with the introduction of advanced statistical models, it is imperative that decision makers understand the implications of using them and their impacts on society based on the models and training data used. Generative AI introduces new dimensions in warfighting tactics, from automated target recognition to intelligence analysis. Military strategies must evolve to incorporate these AI-driven capabilities while considering their implications for battlefield ethics and soldier safety. Failed recognition could result in civilian casualties and infrastructure destruction if not properly managed. The integration of AI in military operations necessitates reforms in military education and training. This includes incorporating interdisciplinary studies that blend technology with ethics, philosophy, and military strategy, thus preparing Soldiers and commanders for AI-augmented warfare. The U.S. Army is pivoting toward merging the two cultures by cultivating data-competent leaders who will not have to rely on analysts to garner insights.26
The primary challenge lies in integrating AI capabilities into existing military structures and operations. This requires not only technological adaptation but also doctrinal and strategic shifts. Perhaps the worst outcome would be a widening of the cultural gap as technologists flee government roles for industry. If integrated well into operations, AI offers opportunities for enhanced operational capabilities, such as improved situational awareness, faster decision-making, and more accurate targeting, contributing to the overall effectiveness of military operations. Generative AI redefines the character of warfare and security, posing new questions about the nature of conflict, the role of human soldiers, and the future of international security dynamics. Failure to legislate and implement AI in a timely manner will likely result in the abuse of highly lethal AI kill chain systems by hostile actors unbound by ethical considerations.
The integration of generative AI into military policy and warfighting presents both challenges and opportunities. It necessitates a new paradigm in military strategy and policymaking, one that harmonizes the advancements in AI with the ethical, strategic, and human aspects of warfare. As military organizations adapt to this AI-driven landscape, the collaboration between technical experts and strategists becomes crucial in shaping effective, ethical, and sustainable military policies and practices.
Conclusion
Generative AI is a disruptive innovation that will completely restructure the military industry. In real time, we are experiencing one of the greatest scientific revolutions in the history of mankind. If you are not convinced of the astonishing advancements of generative AI, go back and reread the introduction: It was written by ChatGPT 4 after training it on this article, a process that took approximately 30 seconds. This type of technology was unimaginable only a few years ago, just like the incredibly lethal kill chains in Ukraine. Within the next five years, extraordinary science will continue to accumulate until both the military and industry have assimilated generative AI’s capabilities. Until then, policymakers must continue to exercise caution while implementing AI in warfare and communicate across the cultural gap with scientists who can explain the inner workings of these complex models. The world may be in the midst of great ambiguity as we hold our breath to see what weapons will emerge from this unprecedented revolution, but at least one thing is certain: By the end of it, the world will surely be changed forever.
Notes
1 Thomas S. Kuhn, The Structure of Scientific Revolutions, 4th ed. (Chicago: The University of Chicago Press, 2012).
2 Disruptive innovation is outlined in Clayton M. Christensen’s The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (Boston: Harvard Business Review Press, 2013).
3 ChatGPT 4 was used to manufacture the introduction as well as various subsections of the article to synthesize sentences that were edited and then implemented. The graphic on page 60 was created using Adobe Firefly.
4 Francisco Garcia-Penalvo and Andrea Vazquez-Ingelmo, “What Do We Mean by GenAI? A Systematic Mapping of the Evolution, Trends, and Techniques involved in Generative AI,” International Journal of Interactive Multimedia and Artificial Intelligence (December 2023), https://www.ijimai.org/journal/sites/default/files/2023-07/ip2023_07_006.pdf.
5 Ibid.
6 Department of Defense, “DoD Announces Establishment of Generative AI Task Force,” 10 August 2023, https://www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/.
7 Ibid.
8 Department of Defense, “Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy,” 27 June 2023, https://media.defense.gov/2023/nov/02/2003333300/-1/-1/1/dod_data_analytics_ai_adoption_strategy.pdf.
9 David Oniani, Jordan Hilsman, Yifan Peng, Ronald K. Poropatich, Jeremy C. Pamplin, Gary L. Legault, and Yanshan Wang, “Adopting and Expanding Ethical Principles for Generative Artificial Intelligence from Military to Healthcare,” npj Digital Medicine 6/11 (December 2023): 1-10.
10 Ibid.
11 Ibid.
12 DoD, “Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy.”
13 Michael Arenander, “Technology Acceptance for AI Implementations: A Case Study in the Defense Industry about 3D Generative Models,” (Master of Science thesis, KTH Royal Institute of Technology, 2023).
14 Ibid.
15 Daniel Wu, “A Self-Driving Uber Killed a Woman. The Backup Driver Has Pleaded Guilty,” Washington Post, 31 July 2023, https://www.washingtonpost.com/nation/2023/07/31/uber-self-driving-death-guilty/; Jeffrey Dastin, “Insight – Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, 10 October 2018, https://www.reuters.com/article/idUSKCN1MK0AG/.
16 Peter Svenmarck, Linus Luotsinen, Mattias Nilsson, and Johan Schubert, “Possibilities and Challenges for Artificial Intelligence in Military Applications,” Swedish Defence Research Agency, Stockholm, Sweden, 2023, https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-160/MP-IST-160-S1-5.pdf.
17 Christensen, The Innovator’s Dilemma, 24.
18 Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton & Company, 2023).
19 Christensen, The Innovator’s Dilemma, 24.
20 Ibid., 72.
21 Scharre, Four Battlegrounds.
22 Ibid.
23 DoD, “DoD Announces Establishment of Generative AI Task Force.”
24 Christensen, The Innovator’s Dilemma, 107.
25 C.P. Snow, The Two Cultures (Cambridge: Cambridge University Press, 2012).
26 Erik Davis, “The Need to Train Data-Literate U.S. Army Commanders,” War on the Rocks, 17 October 2023, https://warontherocks.com/2023/10/the-need-to-train-data-literate-u-s-army-commanders/.
2LT Andrew P. Barlow is currently a student in the Infantry Basic Officer Leader Course at Fort Benning, GA. He graduated from the U.S. Military Academy (USMA) at West Point, NY, with a double major in operations research and economics.
Cadet Allison Bender is currently attending USMA (Class of 2026) and majoring in operations research.
This article appears in the Summer 2025 issue of Infantry. Read more articles from the professional bulletin of the U.S. Army Infantry at https://www.benning.army.mil/Infantry/Magazine/ or https://www.lineofdeparture.army.mil/Journals/Infantry/.
As with all Infantry articles, the views herein are those of the authors and not necessarily those of the Department of Defense or any element of it.