It is the year 2035, and society has been fully integrated with artificial intelligence (AI) and robotics. Humanoid robots help humans with tasks ranging from personal home care to manufacturing to public service. Society believes them to be fundamentally safe because they must abide by the Three Laws of Robotics, which are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A simple look through history shows that no law is unbreakable. Once these fundamental robotic laws are broken, the machines that were designed to protect us and make our lives more efficient will have the ability to turn on us. They already control communication networks, power supplies, medical facilities, and untold amounts of military equipment. How easily society could fall if the technology we rely on so heavily decides to seize control.

Now you might say, “That’s impossible,” or “We have too many fail-safes.” But do we? Our society is fully dependent on technology. The threat does not have to be the scenario alluded to above from the 2004 movie, I, Robot. It could be something as simple as computer hackers or a massive power outage that brings us to our knees.

Technological advancements have enabled us to live our everyday lives more efficiently. Within the same century, we went from the horse and buggy to putting a man on the Moon. More important, computer processing power has developed from needing a computer the size of a building for simple calculations to performing highly complex calculations with a tiny microprocessor that fits on the tip of a finger. The possibilities seem limited only by our imaginations. We ask our phones to change the temperature of our house or to write us a paper based on a few basic inputs. But how far will this go? Are we setting ourselves up for our own destruction? What happens to our humanity?

We have advanced far in our pursuit of efficiency, but I believe we are losing our humanity. We are positioning ourselves to be one step away from a world like I, Robot, or like many of the other universes that people have written about throughout history. They illustrate how our pursuit of efficiency can lead to our downfall. Fiction is only a step or mistake away from turning into reality. My goal is not to create hysteria but to open a conversation about the loss of our humanity.

Understanding the Basics of AI

The concept of AI is not a recent development. Our history is filled with dreams of creating machines to assist with our productivity. In 1726, Jonathan Swift wrote in Gulliver’s Travels about a machine that assisted scholars in generating new ideas. The Three Laws of Robotics mentioned above first appeared in 1942 in Isaac Asimov’s short story “Runaround.” However, it was not until 1955 that the term “artificial intelligence” was first used, in a workshop proposal titled “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The resulting 1956 Dartmouth workshop is considered the beginning of the field of AI.

At its core, AI is the ability of computers and machines to simulate human intellectual functions such as problem-solving, learning, decision making, and comprehension. Within AI, multiple subsets have developed over the years, including:

  • Machine Learning — When AI systems use historical data to learn without direct instruction from human input (a short sketch of this idea follows this list).
  • Deep Learning — Machine Learning models that mimic human brain function.
  • Generative AI — Deep Learning models that can create original content.
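
To make the first of these concrete, consider the small Python sketch below. It is only an illustration: the data, the variable names, and the straight-line model are all invented for the example. The point is that the program is never told the rule; it infers one from historical examples and then applies it to new input.

    # A minimal sketch of "learning from historical data": fit y = m*x + b
    # to example points by least squares, without hard-coding the rule.
    # The data, names, and model here are invented for illustration.

    def fit_line(points):
        """Return slope m and intercept b minimizing squared error."""
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - m * sx) / n
        return m, b

    # "Historical data" the program learns from: hours studied -> test score.
    history = [(1, 52), (2, 55), (3, 61), (4, 64), (5, 70)]
    m, b = fit_line(history)

    # The learned rule can now make predictions on inputs it never saw.
    print(f"learned rule: score = {m:.1f} * hours + {b:.1f}")
    print(f"predicted score for 6 hours: {m * 6 + b:.1f}")

This is the simplest possible case, of course. Real machine learning systems fit far richer models to far more data, but the principle of inferring rules from examples is the same.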

This technology is developing at such a rapid pace that the different types and levels are ever changing. Companies even differ in how they categorize the kinds of AI. I will highlight only three general categories into which most of the others fall:

  • Weak AI — What exists today, such as chatbots, which are limited to specific actions.
  • Strong AI — AI designed to accomplish tasks without human input and to perform at human-like levels. This is still in development.
  • Super AI — While still theoretical, this is the category in which AI surpasses human intelligence and ability. It would become truly human-like in its appearance and disposition.

Our Current AI Situation

With this basic understanding of AI, we can see the impact it has on our everyday lives. One may think, “Well, I don’t use AI,” or “I’ve never used a chatbot.” However, AI is already ubiquitous in our lives. Our search engines, music and product recommendations, wearable fitness trackers, security systems, and email servers that categorize our messages are just a few examples. A 2022 Pew Research study, “Public Awareness of Artificial Intelligence in Everyday Activities,” found that half of Americans are aware of common ways they may interact with AI, such as chatbots and product recommendations, but that only three in 10 can identify the other areas mentioned above. For instance, if you need directions to drive somewhere, AI plans your route and monitors road conditions as you drive.

The Benefits of AI

Before we explore the dangers of AI, it is only fair to analyze its benefits. Advancements in technology have improved our lives and enabled us to achieve more than we ever imagined. I am not here to argue that all technology and the use of AI will have a completely negative impact on our lives. We can always find benefits when these technologies are properly used as tools.

One of the most obvious benefits is our ability to search for information anywhere we have an internet connection, bringing all the information of the world to our fingertips. We no longer need to search through printed books or visit libraries. This saves us immense amounts of time.

With the assistance of AI, we can now solve more complex problems while reducing human error. This has enabled advancements in fields such as medicine and engineering. AI can run hundreds of scenarios at once to find the most effective solution to a problem.

We can now automate a variety of tasks in manufacturing and production, greatly increasing our output and productivity. Although this has cost us jobs, the precision of automation has also reduced manufacturing defects. This lowers manufacturing costs, which provides more cost-effective products to consumers.

We see the same advantage in the military with resource management. The current Field Manual 4-0, Sustainment Operations, discusses the new concepts of precision sustainment and predictive logistics. It states, “precision sustainment is the effective delivery of the right capabilities at the point of employment enabling commander’s freedom of action, extending operation reach, and prolonging endurance,” while “predictive logistics is a system of sensors, communications, and applications (data support tools and data visualization) that enables quicker and more accurate sustainment decision making at echelon from tactical to strategic.” These concepts are powered by AI, which has a direct effect on the battlefield. Sustainers can now more effectively plan and support the warfighter.
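
To illustrate the idea behind predictive logistics (and only the idea; the Python sketch below is an invented toy, not the Army’s actual system, and every number in it is made up), the concept boils down to something like this: consumption reports come in, a forecast is computed, and a resupply recommendation falls out before stocks run short.

    # A toy sketch of the idea behind predictive logistics, not the Army's
    # actual system. Every value below is invented for illustration.

    fuel_usage_gallons = [410, 395, 430, 450, 470]  # daily consumption reports
    on_hand_gallons = 900                           # current stock level

    # Simple forecast: average consumption over the last three days.
    forecast = sum(fuel_usage_gallons[-3:]) / 3
    days_of_supply = on_hand_gallons / forecast

    print(f"forecast demand: {forecast:.0f} gal/day")
    print(f"days of supply:  {days_of_supply:.1f}")

    # Decision support: flag the shortfall before it becomes a crisis.
    if days_of_supply < 3:
        print("recommendation: request resupply now")

Real systems replace the three-day average with far more sophisticated models fed by sensors across the force, but the decision-support shape is the same.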

The Dangers of AI

Now that we have analyzed the benefits of AI, we can consider the dangers and how we risk losing our humanity.

One area of concern is AI’s safety and security. AI, like any other computer program, is ultimately code, and code can be changed or broken more easily than one might imagine. For instance, if you play video games, you know that a simple update can break an entire game because one line or character of code is wrong or misplaced. My undergraduate degree is in computer information studies, and I have seen firsthand how coding can fail in this manner. Once there is a break in the programming, someone must go line by line to find the problem code. Yes, there are computer programs designed to do this, but what if those programs contain malicious code? Or what if Strong AI or Super AI chooses not to fix the code? Additionally, all computer programs can be hacked.
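
To show how little it takes, here is a deliberately simple, hypothetical Python example of my own; the function names and the dose limit are invented. Two versions of a safety check differ by a single character, and the behavior at the boundary silently flips.

    # A hypothetical example of how one character changes behavior.
    # The function names and the limit are invented for illustration.

    MAX_DOSE = 100  # an illustrative limit, not a real medical value

    def dose_is_safe(dose):
        # Intended rule: doses up to and including the limit are allowed.
        return dose <= MAX_DOSE

    def dose_is_safe_buggy(dose):
        # The same check with one character missing ('=' dropped from '<='):
        # the limit itself is now rejected, and every caller silently changes.
        return dose < MAX_DOSE

    print(dose_is_safe(100))        # True, the intended behavior
    print(dose_is_safe_buggy(100))  # False, one character off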

From my experience designing, building, and programming robotics, I have found that robots can be hacked or can contain broken code just like any AI. They follow the code that is written and fall into the category of Weak AI, for now. However, the most dangerous aspect is their inability to reason. This is what separates them from humans in their current state. They are unable to judge between right and wrong and make every decision based on calculations. In the movie I, Robot, the main character is saved during a car accident by a robot while a child is left to die, because the robot calculated that the adult had the higher chance of survival. One could argue that many humans would choose to save the child first, using our ability to reason. It is only in the theoretical state of Super AI that machines could begin to reason, and this would pose even more dangers to our humanity.
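
Here is a toy Python sketch of what such decision-by-calculation looks like; the scenario and the probabilities are invented for illustration. Notice that the entire “decision” is one arithmetic comparison, with nowhere for human judgment to enter.

    # A toy illustration of decision-by-calculation. The scenario and
    # probabilities are invented; the point is that the "choice" is
    # nothing more than maximizing a number.

    victims = [
        {"name": "adult", "survival_probability": 0.45},
        {"name": "child", "survival_probability": 0.11},
    ]

    # The purely calculated choice: save whoever scores highest.
    choice = max(victims, key=lambda v: v["survival_probability"])
    print(f"calculated choice: save the {choice['name']}")

    # Nothing in this code can weigh the human instinct to save the
    # child first; that judgment is exactly what the calculation lacks.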

AI may also take on the biases of its creators while it is still in the Weak AI state. As mentioned, AI helps us find information with search engines. However, the results it returns can easily be biased toward certain information, or toward information more favorable to the creator. Conduct the experiment using different search engines and assistants like Alexa or Siri and see for yourself.

In the military, we are developing AI to control more and more systems. While these systems have provided a benefit for precision sustainment and predictive logistics, what do we do when the power fails? Can we still conduct the mission using only analog systems? We are entrusting our equipment and supplies to driverless vehicles that we already know can be hacked or may not follow the commands they are given.

I have built robots designed to follow a pattern or line to a destination. When something interrupts the set path, the system fails. Humans must remain in control on the battlefield in all aspects. If we want to remove humans from harm’s way, then the equipment and vehicles must always be controlled by a human. This most certainly includes arming robots powered by AI. Have we not learned from countless fictional examples, such as the Terminator movies, what could happen when we arm machines? Yes, many argue that those stories are just fantasy and that AI would never do that. But remember what Super AI could accomplish if it became a reality. Many thought we could never put a man on the Moon, but we did. Look what we have achieved in just the last 100 years.
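
A simplified sketch of such a robot’s control loop shows the brittleness; the sensor labels and simulated readings below are invented stand-ins for real hardware. The moment the input falls outside the handful of cases the code anticipates, the system halts.

    # A simplified line-follower control loop. Sensor labels and the
    # simulated readings are invented stand-ins for real hardware.

    def step(sensor_reading):
        """Map one sensor reading to a steering command."""
        if sensor_reading == "line_left":
            return "steer left"
        if sensor_reading == "line_right":
            return "steer right"
        if sensor_reading == "line_center":
            return "go straight"
        # Anything unanticipated (a gap in the tape, glare, an obstacle)
        # has no handler: the robot simply stops.
        return "halt"

    path = ["line_center", "line_left", "line_center", "gap", "line_center"]
    for reading in path:
        command = step(reading)
        print(f"{reading:12s} -> {command}")
        if command == "halt":
            print("path interrupted; the system fails without a human")
            break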

These examples of the dangers of AI are just the beginning. More exist, but most of them link back to the ultimate danger: what happens when we lose control? The BBC once quoted Stephen Hawking as saying, “The development of full artificial intelligence could spell the end of the human race. … It would take off on its own, and redesign itself at an ever-increasing rate. … Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” We are opening Pandora’s box, and once it is open, there may be no going back.

What Makes Us Human

I could have used a chatbot to write a viable version of this article within seconds. Instead, I spent hours researching, drafting, writing, and editing it on my own. The chatbot would have saved me all that time, but the article would not be my work or my thoughts. In this way, AI inhibits our creativity and what makes us human. Some argue that they use AI and chatbots only to generate ideas or draft emails. However, in doing so, they are hindering their ability to think for themselves, limiting their imagination and reasoning, and, worst of all, becoming lazier.

With our heavy reliance on technology, we are becoming lazier than ever before. With all the technology at our fingertips, I argue we are less intelligent now than we were 100 years ago. The National Assessment of Educational Progress, a federal standardized test, has shown a drop in students’ performance in basic math and reading from 2004 to 2024. The largest drop was from 2020 to 2022 during COVID-19, when most students attended school remotely over the internet. Many people under the age of 25 cannot read an analog clock or do simple math in their heads. They need a digital clock to tell time and a calculator to do basic arithmetic. This is a step backward for human development, not forward.

We are at a point in history where we are unable to distinguish between human products and AI products. Technology now exists that can take a few inputs into an AI program and produce a podcast that sounds like two human beings having a conversation, complete with humor and emotion. We are developing ourselves out of existence.

Can We Save Our Humanity?

We have already started down the path of full AI integration; we may be at the point of no return. Is it too late for us to make a difference? As with everything in life, there must be a balance. There is real good that we cannot ignore when AI and robotics are used as tools. However, is the efficiency worth sacrificing our humanity? We have analyzed the dangers that exist, and we are truly playing God with this technology. I write this to challenge our current way of thinking and to examine the path we are on. I will leave you with one simple question: Just because we can, does that mean we should?

--------------------

CPT Garett H. Pyle is currently the Military Editor-in-Chief for the Army Sustainment Professional Bulletin and has been selected as the first Sustainment Center of Excellence Harding Fellow at Fort Gregg-Adams, Virginia. He joined the Army Reserves in 2012 as an 09R (Simultaneous Membership Program cadet) while attending ROTC at Washington & Jefferson College, where he commissioned into the Transportation Corps in 2016. He holds a Master of Arts in transportation and logistics management from American Military University. He is an honor graduate of both the Transportation Officer Basic Course and the Logistics Captains Career Course.

--------------------

This article was published in the spring 2025 issue of Army Sustainment.
