Inherent in the coming internet of battlefield things are challenges that commercial products don't face. But those products might have solutions to the Army's problems, which is why ARL and its partners are exploring novel distributed processing approaches, a domain the Army practically invented.

Distributed processing, using multiple computers to run an application, is not a new idea. But as technology advances, opportunities arise for novel distributed processing approaches that take advantage of emerging network-based communication, new computing systems, and innovations in algorithms and software.

First realized around 1983 at Aberdeen Proving Ground (APG), Maryland, distributed processing has evolved over several decades as information technology has expanded exponentially. It will be a key technology for future Army operations, especially complex Soldier situational awareness.

As computer and network capabilities grew, distributed processing also grew to mean multiple, interconnected processors or computers working together to perform a common calculation or to solve a common problem. The Ballistic Research Laboratory, predecessor to the U.S. Army Research Laboratory (ARL), implemented network communication protocols (now known as internet communication protocols) for communication among four processors.

With each generation of distributed processing, more capable processors have been pushed further out into organizations and society, bringing more functionality, greater interaction and improved communication among different tiers of processing, with greater integration among them. This trend has culminated in the internet of things: the proliferation of processors, mobile devices and sensors embedded in the physical objects (appliances, vehicles, buildings and other items) that surround us in our daily lives.


Today, the motivation for novel distributed processing is twofold: recognition of the enormous potential that resides in the unused and dedicated processing power of many connected devices, and the need to know more, sooner, and to leverage that knowledge to affect immediate future events. In the same way that every webpage you visit serves up advertisements based on your browsing habits, the Army needs to do something similar with intelligence, surveillance and reconnaissance systems so that Soldiers are served what they need for superior situational awareness.

The Army faces directly analogous technical challenges: Soldiers need to know more, sooner (situational awareness), to allow rapid, decisive action. Now, and even more so in the future, the battlespace is characterized by highly distributed processing; heterogeneous and mobile assets with limited battery life; communications-dominated operations over restricted network capacity; and time-critical needs in a rapidly changing, hostile environment. The capabilities to be developed for enhancing the Army's situational awareness in contested battlefield environments differ from traditional commercial applications, which are targeted at exploiting the consumer. Essentially, the Army needs to be Facebook in reverse: exploiting the data for the use of the consumer, not exploiting the consumer for the use of the data.

Distributed processing is one of the essential technologies for maintaining overmatch in the land domain in various operational and contested environments, including cyber and artificial intelligence. Some examples of future operational environments where innovative distributed processing approaches are essential include:

- Real-time situational awareness.
- Distributed machine learning and relearning.
- Distributed intelligence.
- Human-machine teaming.
- Delivery of big data analytics at the right place in a timely manner.
- Operations in megacities.
- Cooperative and collaborative engagements.
- Cyber and electromagnetic engagements.
- Accelerated learning.
- Augmented reality.


Large, expensive computers with interconnected processors were available to a small number of expert users in the 1980s. By the 1990s, the industry had moved away from custom processors to commodity chips, co-processors and shared software. Concurrently, internet-enabled distributed processing grew and proliferated, most notably in projects like SETI@home (the University of California, Berkeley-based Search for Extraterrestrial Intelligence, with 5 million internet-connected devices) and Rosetta@home (molecular biology, with 1.6 million internet-connected devices). For these applications, algorithmic innovations took advantage of unused computer time donated by people worldwide.

They also benefited from the asynchronous nature of the applications, in which every calculation is independent of every other calculation. These projects demonstrated a path toward exascale computing: a billion billion operations per second, a level of performance not achievable by any single supercomputer that exists today.

By the 2000s, the internet brought about service-oriented architectures with seamless web access. Later, hardware virtualization allowed software to emulate an entire computing infrastructure, culminating in the popularity of hosting pictures and other personal data in the cloud. We refer to the cloud as distributed processing, since it literally is distributed all over the world, but its purpose is to centralize computing infrastructure, relieving end-user organizations of having to invest in it individually. DOD's Distributed Common Ground System is an excellent example of cloud computing at the edge.

In contrast to cloud computing, emerging technologies in ad hoc networked mobile devices, the internet of battlefield things, special-purpose robotics, unmanned vehicles and social networks will produce enormous amounts of data. It is critical for Army scientists to explore novel distributed processing approaches for Army-specific applications, especially those with the potential to enhance the speed of decision-making.


Distributed processing "at the edge" is a new paradigm in which computer processing converges with low-power processing, intelligent networking, algorithms and analytics as one entity, as opposed to stovepiped technologies. Distributed processing at the edge (also referred to as edge computing, fog nodes, cloudlets, micro data centers and micro-clouds) consists simply of localized, trusted, resource-rich computers that are connected.

Edge computing requires a lightweight solution using containers for distributed processing. Unlike a physical canister that stores things, these containers are software packages that bundle an application with everything it needs to run, so data can be processed near its source. The draw is that containers can be tailored to single solutions, such as a machine-learning container or a video-processing container. Army scientists want to figure out how to harness the benefits of edge computing with containers while navigating the challenges of mobility, such as intermittent bandwidth, ad hoc networking and policy-based environments.
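To make the "tailored to single solutions" idea concrete, a container image is typically described in a short build file. The sketch below is a hypothetical Dockerfile for a single-purpose video-processing container (the script name and dependency are invented for illustration; an Army edge deployment would use its own hardened base images and policies):

```dockerfile
# Hypothetical single-purpose edge container: one tailored job
# (video frame processing) packaged with exactly the dependencies
# it needs, small enough to push to a resource-limited edge node.
FROM python:3.11-slim
WORKDIR /app
# The application and its one dependency are the whole payload.
COPY process_frames.py .
RUN pip install --no-cache-dir opencv-python-headless
# The container runs one job near the data source and nothing else.
CMD ["python", "process_frames.py"]
```

Because the image carries its own dependencies, the same container can be rebuilt or redeployed on any edge node that can run a container engine, which is what makes the approach attractive under intermittent connectivity.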

Emergent computing is another evolving form of distributed processing, in which information processing and control emerge through the local interaction of many simple units that exhibit complex behavior when combined. Intelligent software agents fall into this arena: sophisticated computer programs that act on behalf of their users to find trends and patterns.

There are also multiagent systems: loosely coupled networks of intelligent agents that interact to solve problems beyond what any one agent could accomplish alone.
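A minimal sketch of that idea in Python: each agent holds only a partial view of the situation, and a simple exchange of local knowledge yields a combined picture that no single agent possesses (the agent names and observations are invented for illustration):

```python
class Agent:
    """A loosely coupled agent that knows only its own sensor data."""

    def __init__(self, name, observations):
        self.name = name
        self.observations = set(observations)

    def share(self):
        # An agent contributes its local knowledge to the team.
        return self.observations


def fuse(agents):
    """Union every agent's partial view into a common picture --
    the multiagent system 'knows' more than any single member."""
    picture = set()
    for agent in agents:
        picture |= agent.share()
    return picture


agents = [
    Agent("uav-1", {"vehicle at grid A3"}),
    Agent("ground-sensor-7", {"movement at grid B1"}),
    Agent("uav-2", {"vehicle at grid A3", "radio signal at grid C2"}),
]
print(sorted(fuse(agents)))
```

Real multiagent systems add negotiation, task allocation and conflict resolution on top of this exchange, but the core benefit is visible even here: the fused picture contains observations no individual agent made.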

Neural-inspired computing is fast becoming an option for low-power distributed processing. It mimics the neurons and synapses of a biological brain, with communication and processing among neurons and synapses implemented using efficient digital or analog techniques, including two-dimensional (2-D) atom-layered nanotechnologies. An example of a 2-D material is a crystalline material consisting of a single layer of atoms, which exhibits unusual semiconductor and neuromorphic characteristics at the nanoscale.
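As a rough illustration of the neuron model behind such hardware (not the design of any particular neuromorphic chip), a leaky integrate-and-fire neuron can be sketched in a few lines of Python; the leak, threshold and input values here are arbitrary:

```python
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential
    integrates incoming current, decays (leaks) each time step, and
    emits a spike -- a discrete event, not a continuous value --
    when it crosses the threshold. Event-driven spiking is one
    reason neuromorphic hardware can run at very low power."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)                    # fire
            potential = 0.0                     # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Three small inputs accumulate, the fourth pushes the neuron over
# threshold, and the potential resets afterward.
print(simulate_lif([0.3, 0.3, 0.3, 0.6, 0.0, 0.9]))  # → [0, 0, 0, 1, 0, 0]
```

In a neuromorphic processor this update happens in parallel across many physical neurons, and only the spikes travel between them, so communication and power scale with activity rather than with clock cycles.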

In addition to continuous innovations in scalable algorithms and software, future computing architectures like quantum networks, data flow computing, and cyber- and electromagnetic-secured heterogeneous processors are going to play a role in overcoming distributed processing shortcomings that surface in military scenarios.


ARL is working toward the capabilities of the next generation of distributed processing through collaborative projects with academic institutions and industry, as well as through internal programs.

External collaborative programs that address distributed processing challenges, from algorithms to theory, include the international technology alliance with the United Kingdom Ministry of Defence, the internet of battlefield things, distributed and collaborative intelligent systems and technology, the U.S. Army High Performance Computing Research Center, the Center for Distributed Quantum Information and ARL's Single-Investigator Program, executed through the Army Research Office.

There are also internal projects that lay some of the foundation. For example, we work with IBM, Purdue University and Lawrence Livermore National Laboratory to understand the programming and use of neuromorphic processors, or brain-inspired computing. These neuromorphic processors have proven quite adept at machine-learning tasks while consuming roughly one-thousandth the power of conventional processors.


The Army has been at the forefront of computing and distributed processing and continues to invest in related research to shape how the future Army will fight and win. The complexities of distributed processing become clearer as the ways in which humans will engage with distributed, artificially intelligent systems become more defined.

The reliance of intelligent systems on wireless communication and networked processes makes them vulnerable to cyber, physical and electronic attacks. Thus, it is necessary to develop technologies that mitigate those risks and keep systems functional in the face of such attacks. In the current and future world, this requires innovations in distributed processing and computation on and off the battlefield.

For more information, contact the authors.

DR. RAJU NAMBURU is chief of the Computational Sciences Division at ARL. He has more than 100 publications in various journals and refereed papers in international conferences and symposiums in the areas of computational sciences, computational mechanics, scalable algorithms, network modeling and high-performance computing. He is a Fellow of the American Society of Mechanical Engineers and a member of the U.S. Association for Computational Mechanics. He holds a Ph.D. in mechanical engineering from the University of Minnesota, and received master of engineering and bachelor of engineering degrees in mechanical engineering from Andhra University in India.

DR. MICHAEL BARTON, a senior scientist for Parsons Corp., provides contract support to ARL. He has been at APG since 2001. His entire career has been in physics-based modeling and simulation and high-performance computing. He previously served as a consultant in the aerospace industry; as a contractor supporting the Air Force at Arnold Air Force Base, Tennessee, and NASA in Ohio; and with the Boeing Co. in Seattle. He received his Ph.D. and his B.S. in engineering science and mechanics from the University of Tennessee, Knoxville, and his master of engineering degree in aeronautics and astronautics from the University of Washington.

This article is published in the January - March 2018 issue of Army AL&T magazine.