ABERDEEN PROVING GROUND, Md. -- Analysts at the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Analysis Center, or DAC, have completed two studies on the human element of lay error during target acquisition. Together with the University of Georgia, or UGA, DAC conducted virtual experiments in which participants fired on a variety of targets to establish a baseline for human lay error.

DAC is one of DEVCOM’s eight science and technology centers. As the U.S. Army’s largest in-house analytical organization, DAC delivers objective analysis, experimentation and performance characterization to ensure readiness and inform modernization decisions. Through this analytical work, DAC scientists can determine next steps toward minimizing lay error and inform how targeting aids such as AI are brought into the future operating environment.

When targeting an adversary’s asset, be it a tank or a command structure, operators are taught to align their reticles with the center of their targets. The distance off center at the moment of firing is referred to as lay error, and it can make the difference between taking a target out of the fight and a miss. Jennifer Forsythe, an operations research analyst at DAC, published two studies to determine human baselines for lay error, and consequently what benchmarks AI must surpass to outperform human targeting decisions.
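
As a rough illustration, and not DAC’s actual tooling, lay error at the trigger pull can be expressed as the reticle’s offset from the target’s true center, converted to an angle at the firing range. The function name, coordinates and units below are all hypothetical:

# Hypothetical sketch: lay error as the reticle's offset from the
# target's true center at the moment of firing, expressed as an angle.
# Offsets are in meters in the target plane; range is in meters.
def lay_error_mrad(reticle_xy, center_xy, range_m):
    """Return (x, y) lay error in milliradians at the given range."""
    dx = reticle_xy[0] - center_xy[0]  # horizontal miss distance, m
    dy = reticle_xy[1] - center_xy[1]  # vertical miss distance, m
    # Small-angle approximation: angle (rad) ~= offset / range.
    return (1000.0 * dx / range_m, 1000.0 * dy / range_m)

# Example: reticle laid 0.25 m right and 0.10 m low of center at
# 2,000 m yields a lay error of (0.125, -0.05) milliradians.
print(lay_error_mrad((0.25, -0.10), (0.0, 0.0), 2000.0))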

“Classic target recognition is detect, recognize and identify,” Forsythe said. “Once you identify, the operator would then simultaneously lay and lase the target to find the range. Once you do all three, the vehicle commander decides if they would like to fire. Our question is, when we pull the trigger, will the reticle be overlaid directly on center? We need to determine the human limits of said targeting capabilities.”

To determine the human baseline, the team at DAC partnered with UGA on a multi-year effort using a video game-like interface. Participants were given a controller with simple inputs and were tasked with laying their tank’s reticle over the direct center of targets at various ranges, then pressing “fire.” Targets ranged from simple shapes to the more complex, realistic geometry of an enemy tank. Forsythe says that with complex shapes, it is harder for a user to determine a target’s true center versus where the user perceives that center to be.

The Unreal Engine was used to simulate multiple target types and ranges for untrained participants to fire on, including realistic models of tanks. This added a new layer of complexity, as the area in which a round hits or misses a target becomes irregular, as indicated in the bottom-right graph (green indicating a hit, red a miss). (Photo Credit: U.S. Army)
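
To see why an irregular silhouette complicates hit/miss scoring compared with, say, a circular target, consider a point-in-polygon test. The tank outline and shot coordinates below are invented for illustration:

# Hypothetical sketch: scoring a shot against an irregular silhouette
# with a ray-casting point-in-polygon test, rather than a simple
# "within radius of center" check that works for circular targets.
def hits_silhouette(point, polygon):
    """Ray casting: True if the point falls inside the closed polygon."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Invented, highly simplified tank profile (hull plus turret), meters.
tank = [(0, 0), (6, 0), (6, 1.5), (4, 1.5), (4, 2.5), (2, 2.5), (2, 1.5), (0, 1.5)]
print(hits_silhouette((3.0, 2.0), tank))  # True: shot lands on the turret
print(hits_silhouette((5.0, 2.0), tank))  # False: passes over the hull
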
“Our 15 untrained participants, as part of the emerging results of a 50-participant study, fired over 3,600 rounds at four distances. We then graphed the errors for both the x and y axes,” Forsythe said. “The whole point of our study was not to refine input ... it is to find out how accurate you are.”
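
A minimal sketch of the kind of per-axis summary the quote describes, assuming a hypothetical shot log of (range, x error, y error) records; the numbers are invented:

import statistics
from collections import defaultdict

# Hypothetical shot log: (range_m, x_error, y_error) per trigger pull.
shots = [
    (500, 0.05, -0.02), (500, -0.10, 0.04),
    (1000, 0.20, 0.15), (1000, -0.08, -0.12),
    (2000, 0.35, -0.30), (2000, 0.28, 0.22),
]

# Group errors by firing distance and summarize each axis, mirroring
# the study's graphing of x and y errors at each distance.
by_range = defaultdict(lambda: ([], []))
for rng, ex, ey in shots:
    by_range[rng][0].append(ex)
    by_range[rng][1].append(ey)

for rng in sorted(by_range):
    xs, ys = by_range[rng]
    print(f"{rng} m: x mean={statistics.mean(xs):+.3f}, "
          f"x sd={statistics.stdev(xs):.3f}, "
          f"y mean={statistics.mean(ys):+.3f}, "
          f"y sd={statistics.stdev(ys):.3f}")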

When asked what makes lay error a determining factor on the battlefield worth researching, Forsythe said that it quantifies an issue common to all kinds of targeting, not just ground vehicles. On top of that, lay error tends to become more perplexing as more factors are tied into the combat scenario.

Future studies will examine how different variables compromise a user’s lay error. Environmental factors such as rain, fog or partially obscured targets, all of which have historically given AI trouble, are scheduled for human lay-error accuracy testing.

“AI can have trouble identifying objects it is not trained to identify,” Forsythe said. “Part of the future work will be on percentage of obstruction and contrast values in a very scientific manner to quantify human capabilities.”

Forsythe presented the collaborative group’s findings at the Ground Vehicle Systems Engineering and Technology Symposium, or GVSETS, and Modernization Update in August, sharing the technical research with approximately 1,800 attendees. The group will also present at the Interservice/Industry Training, Simulation and Education Conference, or I/ITSEC, known as one of the world’s largest modeling, simulation and training events.