Op-Ed: Glitching to Overmatch; Synthetic Training Environment for Close Quarters Combat

By LTC Damon Durall, June 5, 2018

3D Meshing (Photo Credit: U.S. Army)

A fair fight. The showdown at high noon is a well-worn trope of Hollywood westerns: a white-hatted good guy and a bad guy wearing a black one meet in single combat. Tension builds to a nearly unbearable level, and the outcome of the duel seems uncertain. As steely-eyed warriors face off in front of a saloon in an old West town, survival hangs solely on who possesses the fastest reflexes and the best aim. Normally our movie hero's lightning-fast draw shatters the silence, his aim is true, and the bad guy slumps to the ground in yet another good-guy triumph. Perhaps he wins because he dedicated himself to endless hours of quick-draw practice, or simply possesses the natural gift of superior hand-eye coordination. Maybe it is because he is the "good guy," and the fair fight provides some measure of moral satisfaction. That's the movies, though, and real combat is no place for building drama, or for fairness. Fights that really count require overmatch in every dimension of finding, fixing, and finishing the enemy. Relying solely on a fast hand or natural talent just isn't enough: it is always best to eliminate uncertain outcomes.

Shameless cheating. After dinner last night, my family and I were relaxing in the living room when our daughter, who is away at college, FaceTimed our sixteen-year-old son. Almost as quickly as her face appeared on the screen, she blurted out, "Did you get my snap?" As it turns out, humble gal that she is, she had sent our son a Snapchat message containing a screenshot of her most recent Fortnite win. For those without teenagers who may not know, Fortnite is a massively popular and tremendously difficult online video game in which 100 players from around the world are dropped onto a virtual island and the last one to survive a Hunger Games-style, last-man-standing combat scenario wins. What matters here is not that she won, but that she won by exploiting an all-too-common "out of bounds" video game graphics glitch, which enabled her to hide inside the wall of a building and remain undetected while picking off the other players one by one. Normally, glitching is a bad thing in the video game world: it is a failure of the game's designers to accurately replicate the physics of the real world when re-creating it synthetically. As you can imagine, this often drives poor game reviews, as it diminishes the fairness of gameplay. In this case, my daughter won because a random glitch allowed her to sidestep a law of real-world physics, giving her an unfair edge that enabled her to find, fix, and finish the other players. Evidently, winning a Fortnite round, even while cheating, is a really big deal.

Lost in translation. About a half dozen years ago, while assigned as a rotating military faculty member at the United States Military Academy, I learned that my summers would be consumed by Cadet Summer Training. During my first summer there, I was on a team responsible for cadet land navigation. Although that seems like a pretty straightforward task, like all things military and academic, it was not. At the time, Army senior leaders were expressing a great deal of concern about land navigation failure rates for new Soldiers across the Army, including our very own USMA graduates. To address this problem, an outside scientific study was commissioned to better understand how cognitive learning styles and teaching approaches affected land navigation learning outcomes. Our committee was charged with facilitating the study by assessing cadet learning styles, developing different teaching approaches, and then providing control and experimental groups for assessment while conducting the training. The study revealed that the issue centered on the cognitive processes involved in translating the map, a two-dimensional symbolic representation of the terrain, to the actual point on the ground where the cadet stood. Cadets who were spatial learners (those who learn via images rather than words) were, unsurprisingly, better able to make the cognitive leap from a symbolic representation to reality. But even spatial learners can face significant challenges. Our brains are wonderfully evolved to conduct sophisticated physics calculations in real time and space. With practice, a quarterback can throw a football and hit a fast-moving wide receiver, much as a cougar can leap from a tree and take down an unsuspecting deer. Making those same types of calculations when translating from a symbolic representation to the real world requires far more difficult mental gymnastics. Even the translation of a live video feed can create enough cognitive disjunction to be seriously disorienting, because the sensor does not usually provide the same perspective as the person physically moving through space. You can relate to this disorientation if you have ever used a driving navigation device and experienced the frustration of figuring out where the route begins relative to the direction your car is pointed in a parking lot. It is the old land navigation equivalent of turning a map different ways until it is correctly oriented to the cardinal directions.
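
That parking-lot frustration is, at bottom, a small piece of arithmetic the navigation device solves for us. As a minimal sketch in Python (the function name and example values are my own illustration, not drawn from the study described above or any particular product), the device compares the bearing of the route's first leg with the vehicle's compass heading and cues a signed turn:

    def relative_bearing(vehicle_heading_deg: float, route_bearing_deg: float) -> float:
        """Signed turn, in degrees, needed to face the route start.

        Positive means turn right; negative means turn left.
        """
        # Wrap the difference into [-180, 180) so the cue is the shorter turn.
        return (route_bearing_deg - vehicle_heading_deg + 180.0) % 360.0 - 180.0

    # Car faces 350 degrees (just west of north); the route starts at bearing 20.
    # The correct cue is a 30-degree right turn, not a 330-degree left turn.
    print(relative_bearing(350.0, 20.0))  # 30.0

The machine does this instantly; the point of the study above is that the unaided human brain, working from a flat symbolic map, does it only with effort and practice.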

Finding a latte. Fairly recent breakthroughs in Augmented Reality (AR) have produced wearable devices that place synthetic graphics, models, and symbols onto the real world. Until recently, those synthetic bits were mapped from the outside and downloaded into the visual field of the device user. For example, commercially available AR glasses use GPS location technology to determine where the nearest Starbucks is relative to the device user, and can place waypoints and indicators leading an urban hipster to his next refreshing latte. This relies on preexisting mapping that places the device user, the real world, and the applied symbology in the correct physical relationship to each other. No longer is the latte seeker wearing AR glasses required to look at a paper or digital map and translate it to his actual location. Cutting-edge advancements in wearable Mixed Reality devices have moved the ball even further down the field by mapping the world from the inside out. These wearable devices, with enhanced local computing power, carry outward-facing sensors that map the field of view in real time. From this mapping, a 3D mesh is created upon which a virtual wrap can be applied. Unlike popular Virtual Reality goggles, which create only a virtual world and leave the user stumbling around the real one, the lenses of these Mixed Reality devices are transparent, allowing the user to see correlated virtual and real worlds at the same time. This is important because Mixed Reality 3D meshing enables first- and third-person importation of properly oriented simulated data for the device user. A virtual environment can be stitched onto the real one, enabling the user to maintain orientation between where he is spatially and the imported overlay data and computer-generated imagery. For example, simulated enemy forces, helicopters, and entire environments could be imported to populate and enhance a live training event. Things on the reverse side of a physical barrier or wall, things much more important than our hipster's latte, may also be observed.
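
To make the outside-in version concrete, here is a minimal sketch of how such a device might aim its waypoint arrow, assuming only that it can read its own GPS fix and the point of interest's published coordinates (the function name and the coordinate values are illustrative, not taken from any specific product):

    import math

    def initial_bearing(lat1, lon1, lat2, lon2):
        """Great-circle initial bearing, in degrees true, from point 1 toward point 2."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

    # Hypothetical fixes: the device user and the nearest coffee shop.
    user = (38.8895, -77.0353)
    cafe = (38.8977, -77.0365)
    print(f"Place waypoint arrow at {initial_bearing(*user, *cafe):.1f} degrees true")

From there the device only has to subtract its own compass heading, as in the earlier parking-lot sketch, to draw the arrow in the wearer's field of view. Inside-out 3D meshing goes further by removing the dependence on preexisting maps altogether.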

Glitching to Overmatch. Imagine our high-noon showdown again, but this time the good guy has the advantage of Mixed Reality enhanced vision, amounting to a Fortnite-type glitch for the real world. With real-time 3D meshing, the good guy can see through walls, maintain proper orientation, and see what the street looks like before ever walking onto it. Coupled with a remote sensor, the good guy would also know where the bad guy is standing. As a result of this sensing overmatch, he would already have the bad guy in his sight picture before exiting the saloon door. His heightened sensory power would enable a lightning-quick shot: a certain end to a very unfair fight. But let's be honest, fair fights are only satisfying in the movies. The opportunities Mixed Reality environments open up for military training, mission planning, mission rehearsal, and mission execution are limited only by our imaginations. These devices, powered by the Synthetic Training Environment (STE) Training Simulation Software and One World Terrain, will revolutionize how we visualize the battlefield, turning it into three-dimensional representations with which we can interact and display data from the Soldier level to the Army Service Component Command (ASCC) headquarters. More importantly, they hold the promise to hone cognition in ways previously unimagined, enhancing decision making and sensing skills in training and combat.
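
For the technically curious, the "see through walls" trick reduces to a line-of-sight test against the device's real-time mesh. The sketch below is a deliberately simplified two-dimensional illustration under my own assumptions (the wall segment, positions, and function names are hypothetical, not part of STE or any fielded system): it checks whether a remotely sensed target is occluded by a meshed wall and, if so, renders the through-wall marker anyway.

    from dataclasses import dataclass

    @dataclass
    class Wall:
        """A 2D wall segment pulled from the device's real-time mesh (meters, map frame)."""
        x1: float
        y1: float
        x2: float
        y2: float

    def occluded(user, target, wall):
        """True if the straight line of sight from user to target crosses the wall."""
        def cross(o, a, b):
            # 2D cross product: which side of the line o->a does point b fall on?
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        w1, w2 = (wall.x1, wall.y1), (wall.x2, wall.y2)
        # The segments intersect when each one straddles the line through the other.
        straddles_sightline = cross(user, target, w1) * cross(user, target, w2) < 0
        straddles_wall = cross(w1, w2, user) * cross(w1, w2, target) < 0
        return straddles_sightline and straddles_wall

    # Hypothetical scenario: shooter inside the saloon, remote sensor reports the
    # bad guy's position in the street, with the saloon's front wall in between.
    shooter = (0.0, 0.0)
    bad_guy = (10.0, 0.0)
    front_wall = Wall(5.0, -3.0, 5.0, 3.0)

    if occluded(shooter, bad_guy, front_wall):
        # The "glitch": draw the marker anyway, a through-wall silhouette, so the
        # sight picture is built before the saloon door ever opens.
        print("Render through-wall marker: target fixed before exiting")

A fielded system would run this test in three dimensions against the full mesh, but the principle is the same: the wall stops the bullet and the eye, not the data.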