The Department of Defense officially adopted a set of ethical principles concerning the military's use of artificial intelligence Monday, Feb. 24, 2020. The principles act as a guide for all services to follow as they design, develop, and employ AI processes and capabilities.

According to the principles, the use of AI in the military must be responsible, equitable, traceable, reliable, and governable. Army Futures Command's Artificial Intelligence Task Force (AITF) is charged with understanding how best to incorporate AI into the Army's modernization enterprise.

"The Artificial Intelligence Task Force and Army Futures Command wholeheartedly agree with and are committed to adhering to the ethical principles recently adopted by the DOD," said Brig. Gen. Matthew Easley, AI Task Force director.

Each of these areas of focus will help ensure that future use of AI meets the highest standards adopted by many government, industry, and academic partners.

The principles were adopted on the recommendation of the Defense Innovation Board and come as the Army looks to use AI as an enabling technology in all of its modernization priorities.

"I think the principles are a positive step towards meeting the complexity of implementing AI," said Dr. Stephen Russell, Information Sciences Division Chief at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory and director of the lab's Internet of Battlefield Things Collaborative Research Alliance.

"Like most high-level principles, they can be subjective and contextual in practice," Russell said.
"For example, responsibility often requires a qualitative assessment and thus would require related processes with quantitative controls that may be challenging to define."

Easley said these principles will help influence and inform the moral and responsible use of AI across the full spectrum of Army operations.

"Our nation's laws and values must always be taken into consideration when adopting and investing in the design, development, and deployment of AI technologies within the Army," he said.

As the Army works to make decisions faster and more cost-efficiently, or to buy more time for decisions, AI is a proven tool for commanders to harness. Working under these ethical principles ensures that a framework for decision-making is universally applied.

Artificial intelligence projects already exist in nearly every cross-functional team in Army Futures Command, but Easley said the Army is looking to take it a step further.

"Projects aren't enough - we need to do more than just a couple of AI projects," he said. "We need to build infrastructure so we can help the rest of the Army do its own AI projects."

This infrastructure will be a major step forward in solving what experts call the "Input-Output Problem": the Army will be better equipped to process the volume of data it receives and turn it into actionable information.
The Defense Science Board found that "given the limitations of human abilities to rapidly process the vast amounts of data available today, autonomous systems are now required to find trends and analyze patterns."

This means that AI is all but a necessity when it comes to translating raw data into the critical information commanders need to make informed decisions on the battlefield.

"We want to be able to learn from that data and be able to automate decisions based on that data," Easley said during a recent interview with the Association of the United States Army.

"A lot of our data-sharing is being done with a human in the loop - and rightly so," he said. "But we want machines to look at a potential battlefield and identify targets to a human decision-maker faster."

Russell said that while applying the principles will require more thoughtful practices, they are non-negotiable.

"Having guiding constructs is a necessary requirement in the context of AI," he said. "Towards applying the principles in practice, it will be important to have interdisciplinary subject matter expertise engaged in the implementation."

Easley said these ethical principles won't detract from the Army's mission, but safeguards need to be incorporated from the ground floor, including in the use of concepts such as scaled autonomy.

"When we're going into an engagement and we know that the chance of civilians in the area is slim - for example - we can use more of an autonomous system," he said. "But when that guarantee isn't there, we'll have a much less autonomous system, more Soldier-in-the-loop."

"Understanding how to make these decisions faster and better - with respect to the principles - is a key part."

Related Links:
THE INPUT-OUTPUT PROBLEM: Managing the Military's Big Data in the Age of AI
DOD Adopts Ethical Principles for Artificial Intelligence
PODCAST: Thought Leaders: Artificial Intelligence and its Implications for the U.S. Army