Human-autonomy teaming helps Army design trustworthy AI

By U.S. Army CCDC Army Research Laboratory Public Affairs, October 29, 2020

Dr. Jessie Chen says human factors research can improve how Soldiers interact with intelligent systems

ADELPHI, Md. -- For most tools, the outcome of their use depends almost entirely on the actions of the human wielding them. But the rise of automation and artificial intelligence, and the growing prevalence of smart technology, have complicated the once simple relationship between person and machine.

To navigate this new and unexplored landscape of human-machine interactions, the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory has named the study of human-autonomy teaming one of its top research priorities.

“Human-robot interaction and human autonomy teaming have been active research areas at ARL for many years,” said Dr. Yun-Sheng "Jessie" C. Chen, the Army’s senior research scientist for Soldier Performance in Socio-Technical Systems. “For example, one of ARL’s Essential Research Programs is on human autonomy teaming. ARL also has collaborative alliances with academia and industry on HRI and HAT-related efforts.”
Dr. Yun-Sheng "Jessie" C. Chen, the Army’s senior research scientist for Soldier Performance in Socio-Technical Systems, gives a presentation. (Photo Credit: U.S. Army)

Born and raised in Taiwan, Chen first became familiar with the U.S. Army through the work of her father, a military doctor who often collaborated with U.S. Army medical personnel.

After college, she moved to the United States and pursued higher education at the University of Michigan. There, she received her master’s degree in communication studies and met her husband, a fellow grad student.

It wasn’t until after she spent a few years at home to raise her children that Chen discovered her passion for the sciences.

“At the time, my husband was a faculty member at the University of Central Florida, and I asked him to bring home a copy of the UCF degree catalog,” Chen said. “A doctoral program called Applied Experimental and Human Factors Psychology caught my attention. I thought human factors psychology sounded really interesting, so I applied.”

After she obtained her doctorate and worked for two years as a postdoctoral fellow at the U.S. Army Research Institute, Chen secured a position in 2003 as a research psychologist at the Army Research Laboratory’s Human Research and Engineering Directorate.

During Chen’s first visit to Fort Huachuca, Arizona, Michael Barnes, then the chief of the field element, approached her about conducting research for a new Army program on human-robot interaction.

“Mr. Barnes, who retired a couple years ago, was a great mentor to me,” Chen said. “Our collaboration was very fruitful, and we produced several highly cited papers together.”

Like many of her colleagues at the time, Chen recognized the urgent need to establish a new set of design principles for autonomous machines.

Machines that operate under manual control, such as vacuum cleaners, lawn mowers and automobiles, currently dominate most human-machine interactions; however, that may all change in the future as humans become increasingly reliant on AI systems that don’t require direct input from the user.

For this transition to occur successfully, Chen explained, the design philosophy of autonomous machines must account for a completely different set of human factors considerations.

“The human becomes a supervisor rather than an operator,” Chen said. “Basically, all she or he needs to do is to pay attention to what’s going on and intervene when necessary.”

According to Chen, the relationship between the user and the automated machine can fall apart with disastrous consequences if experts don’t carefully scrutinize all the human behaviors that may come into play.

As examples, she referenced some of the high-profile incidents associated with the Tesla Autopilot system and the Uber autonomous driving program.

“A common issue among those incidents was that those human supervisors stopped paying attention after a while, and when they needed to intervene, it was too late,” Chen said. “So how do you design a human-machine interface so the human can be engaged and maintain proper situation awareness of the environment? This has been an active research area in the human autonomy teaming research community.”

Chen also emphasized the importance of agent transparency: designing autonomous systems so that the human supervisor can easily understand the AI’s intent, the reasoning behind its behavior and its future plans.

In 2014, Chen and her colleagues created the Situation Awareness-based Agent Transparency framework to guide the design of transparent interfaces for human-AI interactions.
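
To make the idea concrete, the sketch below is a hypothetical illustration only, not ARL’s SAT implementation or any Army system. It simply packages the three kinds of information described above, an agent’s intent, its reasoning and its projected plan, in a form a supervisor’s display could render at a glance; all class and field names are invented for this example.

```python
# Hypothetical sketch (not the SAT framework itself): a minimal data structure
# showing the kinds of information a transparent agent interface might surface
# to its human supervisor -- intent, reasoning and projected future actions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TransparencyReport:
    """Snapshot an autonomous agent could expose to an operator-facing display."""
    intent: str                      # what the agent is trying to do right now
    reasoning: str                   # why it chose this course of action
    projected_plan: List[str] = field(default_factory=list)  # what it expects to do next
    confidence: float = 1.0          # how certain the agent is about its projection

    def render(self) -> str:
        """Format the report for a simple status panel."""
        steps = " -> ".join(self.projected_plan) or "none reported"
        return (
            f"INTENT: {self.intent}\n"
            f"REASONING: {self.reasoning}\n"
            f"PROJECTED PLAN: {steps} (confidence {self.confidence:.0%})"
        )


# Example: a ground robot explaining a detour so the supervisor stays oriented
# and can decide whether to intervene.
report = TransparencyReport(
    intent="Reach waypoint Bravo",
    reasoning="Primary route blocked by debris; detouring via the service road",
    projected_plan=["Back up 5 m", "Turn east", "Rejoin route at checkpoint 3"],
    confidence=0.8,
)
print(report.render())
```

The point of such a structure is that the supervisor never has to reverse-engineer what the machine is doing; the interface states the intent, the rationale and the expected next steps directly, which is the kind of engagement and situation awareness Chen describes.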

The SAT framework, which became the topic of multiple articles in a recent IEEE Transactions on Human-Machine Systems special issue on “Agent and System Transparency,” received international recognition and has already aided HAT research groups not only in the United States but also in Germany, Australia and Israel.

“What excites me most about our work is that our research has impacts on Soldier systems and the effectiveness of Soldier performance,” Chen said. “One of the transitions from ARL to the Next-Generation Combat Vehicle program is a transparent interface design. To see that design being adopted by an Army vehicle program is extremely rewarding.”

According to Chen, human-autonomy interaction research plays a vital role in the Army, especially in the context of Multi-Domain Operations as technologies on the battlefield become more sophisticated.

Listen to The Science Behind Human-Robot Teaming on the What We Learned Today podcast!
Chen gives an extensive interview in the Oct. 29, 2020, episode.

For Soldiers to successfully complete their missions, she explained, the Army needs researchers to deliver tools and machines that Soldiers can trust.

“Our research programs, such as the Human Autonomy Teaming Essential Research Program, tap certain aspects of future Soldier systems, but there are so many other aspects that we still need to work on,” Chen said. “This is an exciting time to be a human factors researcher.”
Visit the laboratory's Media Center to discover more Army science and technology stories

CCDC Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command. As the Army’s corporate research laboratory, ARL is operationalizing science to achieve transformational overmatch. Through collaboration across the command’s core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more successful at winning the nation’s wars and coming home safely. CCDC is a major subordinate command of the Army Futures Command.