Army researchers take innovative approach to cybersecurity

By U.S. Army CCDC Army Research Laboratory Public Affairs, October 24, 2019

ADELPHI, Md. -- Army researchers are taking an innovative approach to cybersecurity that will assist Soldiers in more effectively protecting information in resource-constrained environments.

The 15th International Workshop on Mining and Learning with Graphs, an event focused on developing techniques that effectively mine and learn from networked data, recently recognized Army researchers with a best paper award.

The research is part of the detection research area in the U.S. Army Combat Capabilities Development Command's Army Research Laboratory's Cybersecurity Collaborative Research Alliance. Researchers are studying how to make machine learning algorithms more robust to adversarial attacks in resource-constrained environments.

"The information one can draw from these systems, i.e., computers or other devices connected by some network, is typically multi-modal, multi-relational and dynamic," said CCDC ARL researcher Dr. Kevin Chan.

The paper, "Improving Robustness to Attacks Against Vertex Classification," provides insights into four questions:

• How does decoupling structure from attributes affect robustness of the classifier?

• How does selection of the training data affect robustness of the classifier?

• Is there an inherent tradeoff between the classifier's performance and its robustness?

• Does the triangle distribution give the adversary away? (A rough illustration of this idea is sketched below.)
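The triangle question asks whether an attacker's added or removed links leave a detectable footprint in how many triangles each node participates in. As a rough, hypothetical illustration of that idea only, not the paper's actual analysis, the following Python sketch (assuming the networkx and numpy libraries, a stand-in graph and a stand-in set of attack edges) compares per-node triangle counts before and after a perturbation.

```python
# Illustrative sketch only: compare per-node triangle counts of a clean
# graph against a perturbed copy to see whether the "attack" shifts the
# triangle distribution. The graph and the attack edges are stand-ins.
import networkx as nx
import numpy as np

def triangle_counts(graph):
    """Return the sorted array of per-node triangle counts."""
    return np.array(sorted(nx.triangles(graph).values()))

g_clean = nx.karate_club_graph()                        # stand-in network
g_attacked = g_clean.copy()
g_attacked.add_edges_from([(0, 9), (0, 14), (33, 11)])  # stand-in attack edges

clean = triangle_counts(g_clean)
attacked = triangle_counts(g_attacked)

# A crude summary: if the attack closes many new triangles, the mean shifts.
print("mean triangles per node (clean):   ", clean.mean())
print("mean triangles per node (attacked):", attacked.mean())
```

A pronounced shift in a statistic like this is the kind of signal the question above refers to; whether it reliably exposes a real attacker is one of the questions the paper examines.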

According to Chan, digging into and providing insights on these questions makes this research different from much of the existing work on adversarial machine learning.

"The majority of adversarial machine learning work focuses on misclassifying images, i.e., leading them to "see" things which aren't there," Chan said. "This research is different in that it is bringing novel network science and machine learning concepts into cybersecurity research."

The paper proposes an approach to improving robustness to attacks on networks through two main contributions.

First, a within-network classification model that applies a traditional machine learning approach, support vector machines, to features of each node, including its attributes and its connectivity structure.

Second, two methods for selecting training data that aim to ensure that nodes in the test set have connections to nodes in the training set, providing a robust structure on which to base predictions.
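As a rough sketch of what such a pipeline can look like, not the paper's actual model, datasets or selection methods, the following Python code (assuming networkx, numpy and scikit-learn, with synthetic stand-in attributes and labels) builds per-node feature vectors from attributes plus simple structural features, greedily picks a training set so that the remaining nodes each have a trained neighbor, and fits a support vector machine.

```python
# Illustrative sketch only: an SVM-based vertex classifier that combines
# node attributes with simple structural features, plus a training-set
# selection that tries to give every test node a neighbor in the training
# set. The graph, attributes and labels are synthetic stand-ins.
import networkx as nx
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

graph = nx.karate_club_graph()                       # stand-in network
for node in graph.nodes:
    graph.nodes[node]["attr"] = rng.normal(size=3)   # stand-in attributes
    graph.nodes[node]["label"] = int(graph.nodes[node]["club"] == "Officer")

def node_features(graph, node):
    """Concatenate a node's attributes with simple structural features."""
    structural = [graph.degree(node), nx.triangles(graph, node)]
    return np.concatenate([graph.nodes[node]["attr"], structural])

def select_training_nodes(graph, budget):
    """Greedily pick nodes so that as many remaining nodes as possible
    end up adjacent to (or inside) the training set."""
    uncovered = set(graph.nodes)
    selected = []
    while uncovered and len(selected) < budget:
        best = max(graph.nodes,
                   key=lambda n: len((set(graph[n]) | {n}) & uncovered))
        selected.append(best)
        uncovered -= set(graph[best]) | {best}
    return selected

train_nodes = select_training_nodes(graph, budget=8)
test_nodes = [n for n in graph.nodes if n not in train_nodes]

X_train = np.array([node_features(graph, n) for n in train_nodes])
y_train = np.array([graph.nodes[n]["label"] for n in train_nodes])
X_test = np.array([node_features(graph, n) for n in test_nodes])
y_test = np.array([graph.nodes[n]["label"] for n in test_nodes])

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The greedy coverage step is just one simple way to satisfy the connectivity criterion described above; the paper's two selection methods may work differently, and a real deployment would use task-specific node attributes rather than random ones.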

"A common task in networks or graphs is to classify the nodes that make up the network," he said. "This can be used to determine if nodes have been compromised, for example by a virus or being taken over by an adversary, or are faulty such as having a broken sensor. This leads to the capability of cyber-monitoring systems to make friend or foe judgements."

Chan said such applications use features of the individual nodes, and often features of neighboring nodes and the structure of those connections, to make these determinations.

"Adversaries are particularly interested in corrupting training data or manipulating the networks to break or bias the classification process in these networks," Chan said. "Developing algorithms or methods to provide robust techniques towards node, also known as vertex, classification in networks is very important in cybersecurity and other domains."

In the datasets on which the original attack was demonstrated, Chan said, these approaches show that changing the training set can make the network much harder to attack.

"Findings show that an adversary must alter two to four times more links of the network for a successful attack as compared with the baseline of random alterations," Chan said.

Chan said this research can provide increased robustness for operations that employ machine learning to classify entities such as information, people and images while an adversary attempts to thwart the performance of that machine learning.

"This will allow Soldiers to have greater confidence that their analysis has been corrupted or infiltrated," Chan said. "Current approaches are very fragile, they can be broken easily, and robustness to attacks is not well known. This paper shows that a networked approach (understanding the relationships between the entities of interest) may provide such understanding."

Crucial to the success of this project, and of the Cybersecurity CRA in general, is the alliance's collaboration among government, industry and academia to develop and advance the state of the art in cybersecurity.

For this specific research area, ARL has collaborated with Benjamin Miller, a doctoral student at Northeastern University's Network Science Institute and a Lincoln Scholar at the Massachusetts Institute of Technology's Lincoln Laboratory, and his advisor at Northeastern, Professor Tina Eliassi-Rad, both of whom have played a vital role in the project.

"Collaboration with Northeastern University began in 2018 as part of the Cybersecurity CRA," Chan said. "ARL was aware of the university's research activities through the Network Science Society, particularly applying machine learning techniques to network science problems. These approaches are now being applied to our challenges addressed within the Cyber CRA."

According to Chan, artificial intelligence and machine learning are top priorities for the Army, and such approaches are increasingly vital with the emergence of multidomain operations, as adversaries will attack machine learning across multiple operational domains.

"Understanding the robustness of machine learning techniques will be a key capability in a wide range of operating functions for the Army, specifically cyber-operations where Soldiers are expected to need to cope with near-peer adversaries attempting to break our machine learning deployments," Chan said.

Going forward, this research will consider more sophisticated adversarial attacks and validate these approaches on more militarily relevant datasets, such as network traffic and data from intrusion detection systems.

______________________________

The CCDC Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command. As the Army's corporate research laboratory, ARL discovers, innovates and transitions science and technology to ensure dominant strategic land power. Through collaboration across the command's core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more lethal to win our Nation's wars and come home safely. CCDC is a major subordinate command of the U.S. Army Futures Command.

Related Links:

U.S. Army CCDC Army Research Laboratory

U.S. Army Combat Capabilities Development Command

Army Futures Command