“Disinformation Starts at Home”: AI increases the speed and effectiveness of Information Operations

By U.S. Army Mad Scientist Initiative, Futures and Concepts Center, July 16, 2020

(Photo Credit: U.S. Army)

JOINT BASE LANGLEY-EUSTIS, Va. – The U.S. Army’s Mad Scientist Initiative is hosting a series of virtual events on Weaponized Information in order to better understand the rapidly evolving information environment and its impact on the Future Operational Environment. On July 15, 2020, the Mad Scientist Initiative and Georgetown University’s Center for Security and Emerging Technology (CSET) hosted a virtual webinar to discuss AI’s impact on the information environment.

The panel, moderated by CSET Senior Fellow retired Lt. Gen. John Bansemer, featured three CSET Research Fellows: Dr. Margarita Konaev, Katerina Sedova, and Tim Hwang, experts on artificial intelligence, information warfare, and international security in both the European and Pacific theaters.

The panelists stressed that it is imperative not to be anchored by what is currently understood about weaponized information. They emphasized that AI and machine learning will amplify information campaigns and undermine trust, not only by accelerating information operations (IOs), but also by increasing the scope and targeted nature of the information.

Konaev observed, “The traditional line between who and what is real is beginning to blur, and possibly becoming increasingly irrelevant.” How military cultures tackle this will shape how countries approach and deploy IOs. For example, cities produce huge amounts of information, which AI can help process, and this data can be used for more precise narrative development and delivery.

Konaev explained that “proxy warfare means that we will witness the proliferation of AI Systems and weapons to more and more actors around the world, which has massive implications for the future of global security.”

The panelists conveyed that AI not only increases the speed of information operations but also increases their effectiveness by augmenting an adversary’s ability to target messages and align its campaigns with the preferences and behaviors of its targets.

Panelists explained that AI also helps develop convincing personas for distributing information, up to and including synthesizing realistic profile images for fake accounts. They noted that large amounts of training data could potentially be used to tailor messages to the preferences and behaviors of targets, increasing the resonance of the campaigns. These efforts can be augmented by sentiment analysis and natural language processing, automating campaigns that were previously labor intensive.
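The scale of that automation is easier to see with a concrete example. The sketch below is not drawn from the panel; it uses NLTK’s off-the-shelf VADER sentiment analyzer to score a handful of hypothetical posts and sort the accounts into audience segments. The account names, posts, and thresholds are illustrative assumptions, but the same few lines of code run just as easily over millions of posts.

```python
# A minimal sketch (not from the panel) of the off-the-shelf "sentiment analysis"
# capability the panelists describe: scoring public posts and grouping accounts
# by attitude toward a topic. All account names and posts are hypothetical.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download for VADER
sia = SentimentIntensityAnalyzer()

# Hypothetical scraped posts about the same topic.
posts = {
    "account_a": "This policy is a disaster and nobody is listening to us.",
    "account_b": "Honestly, the new policy seems reasonable to me.",
    "account_c": "Not sure what to think about the policy yet.",
}

# Score each post and bucket the account by overall sentiment; a campaign could
# then match a pre-written narrative variant to each audience segment.
for account, text in posts.items():
    compound = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    bucket = ("negative" if compound < -0.2
              else "positive" if compound > 0.2
              else "neutral")
    print(f"{account}: compound={compound:+.2f} -> {bucket} audience segment")
```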

A key takeaway is that “disinformation starts at home.” That is, foreign actors mobilize and target domestically generated conspiracies, making attribution convoluted and difficult. Foreign bad actors attempt to exploit existing fissures in society by amplifying polarizing information; these tactics often modify the Russian-developed “Firehose of Falsehood” approach.

Sedova pointed out, “The early ‘firehose of falsehood’ model was a repetitive, high-volume model of half-true narratives pushed by human operators and trolls/bots onto both sides of a conflict to exploit natural fissures in society. COVID has opened more opportunities to exacerbate such divides.” Sedova further elaborated, “Russian influence campaigns were honed first at home, then against their neighbors, before they were used against advanced democracies. Other competitors have taken similar paths.”

The panelists also discussed deepfakes, which are likely to be deployed in future information campaigns. Although deepfakes are threatening, they are only one type of content; AI-generated news and memes are other examples of media that will be weaponized through IOs and are being used at a massive scale. Hwang explained, “A high-quality deepfake can take weeks, and time scales vary with the type of media (i.e., pictures are faster than video). This is a constraint on our adversary – they cannot deploy this sort of messaging in an instant.”

Hwang cautioned, “Deepfakes are more likely to be deployed at critical moments than as ‘swarm attacks,’ since releasing deepfakes helps the training of detection software.” Instead, he stated, “Deepfakes will be saved for when they can have the most impact.” While deepfake videos are likely to be influential, the panelists contended that AI-generated media, like news articles and memes, will also be damaging to society.

Because disinformation starts at home, policy will need to address societal resilience and include a broad coalition of thinkers, from educators to technology experts to policymakers. To combat disinformation, the panelists suggested developing criteria for when IOs are effective enough to warrant intervention, i.e., a framework for when an IO becomes strong enough to warrant a “red alert.”

Panelists emphasized that any solution to information operations would need to leverage the whole of society and include multiple government agencies. They listed the Department of Defense, the Department of Education, and the Department of Homeland Security as important actors in this space. They also highlighted how technology could be leveraged as a solution, including “disinformation alerts” to notify media consumers of potential campaigns.

The Mad Scientist Information Warfare Virtual Series capstone conference takes place on July 21, 2020. The virtual conference, co-sponsored by Georgetown University and the Center for Advanced Red Teaming, University at Albany, SUNY, will feature opening remarks by General Funk, Commanding General of U.S. Army Training and Doctrine Command, and discussions on understanding the information environment and solutions to this new challenge. Guest speakers include Keith Law of The Athletic; Lewis Shepard, VMware; the Center for Advanced Red Teaming; Kara Frederick, CNAS; Dr. Neil Johnson, DARPA; the Global Engagement Center, U.S. Department of State; and the Georgetown University Center for Security Studies. To participate in this virtual conference, you must first register here [via a non-DoD network].