Army's MIND Lab able to decode brain waves

By C. Todd Lopez | November 6, 2015

Dr. Anthony Ries instructs Pfc. Kenneth Blandon on how to play a computer game, using only his eyes to control the direction of fire of a bubble-shooting cannon at Aberdeen Proving Ground, Md., Nov. 3, 2015. Ries is a cognitive neuroscientist who studies visual perception and target recognition. (Photo Credit: U.S. Army)

ABERDEEN PROVING GROUND, Md. (Army News Service, Nov. 5, 2015) -- In an Army Research Laboratory facility here called "The MIND Lab," a desktop computer was able to accurately determine what target image a Soldier was thinking about.

MIND stands for "Mission Impact Through Neurotechnology Design," and Dr. Anthony Ries used technology in the lab to decode the Soldier's brain signals.

Ries, a cognitive neuroscientist who studies visual perception and target recognition, hooked the Soldier up to an electroencephalograph, or EEG - a device that reads brain waves - and then had him sit in front of a computer to look at a series of images that would flash on the screen.

There were five categories of images: boats, pandas, strawberries, butterflies and chandeliers. The Soldier was asked to choose one of those categories, but keep the choice to himself. Then images flashed on the screen at a rate of about one per second. Each image fell into one of the five categories. The Soldier didn't have to say anything, or click anything. He had only to count, in his head, how many images he saw that fell into the category he had chosen.

When the experiment was over, after about two minutes, the computer revealed that the Soldier had chosen to focus on the "boat" category. The computer accomplished that feat by analyzing the Soldier's brainwaves: when a picture of a boat flashed on the screen, his brain waves looked different than they did when a strawberry, a butterfly, a chandelier or a panda appeared.
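For readers curious about the mechanics, the sketch below shows, in Python, one simplified way such an experiment could be scored: average the brainwave response to each category's images and pick the category with the strongest P300-like deflection around 300-500 milliseconds after an image appears. The synthetic data, array shapes and time window are illustrative assumptions, not ARL's actual software.

```python
# A minimal sketch (not ARL's pipeline) of scoring an RSVP experiment
# like the one described: average the EEG response to each category's
# images and pick the category with the strongest P300-like bump.
import numpy as np

rng = np.random.default_rng(0)

FS = 250                          # sampling rate in Hz (assumed)
N_IMAGES, N_SAMPLES = 120, 250    # 120 flashed images, 1 second of EEG each
CATEGORIES = ["boat", "panda", "strawberry", "butterfly", "chandelier"]

# Synthetic data: one EEG channel epoched around each image onset.
labels = rng.integers(0, len(CATEGORIES), size=N_IMAGES)
epochs = rng.normal(0.0, 1.0, size=(N_IMAGES, N_SAMPLES))

# Simulate the attended category: "boat" images get a bump at 300-500 ms.
target = CATEGORIES.index("boat")
window = slice(int(0.3 * FS), int(0.5 * FS))
epochs[labels == target, window] += 2.0

# Score each category by its mean amplitude in the P300 window.
scores = {
    cat: epochs[labels == i, window].mean()
    for i, cat in enumerate(CATEGORIES)
}
decoded = max(scores, key=scores.get)
print(f"decoded attended category: {decoded}")  # expected: boat
```

A real system would train a subject-specific classifier on multiple channels rather than thresholding one averaged window, but the principle - target images evoke a measurably different response - is the same.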

TOO ... MUCH ... DATA

Ries said that a big problem he sees for the intelligence community is the vast amount of image information coming in to be analyzed - imagery from unmanned aerial vehicles or satellites or surveillance aircraft, for instance. Everything must be looked at and evaluated.

"Our ability to collect and store imagery data has been surpassed by our ability to analyze it," Ries said.

Ries thinks that one day the intelligence community might use computers and brainwaves, or "neural signals," to more rapidly identify targets of interest in intelligence imagery - much the way the computer in his lab identified pictures of boats as targets of interest for the Soldier who had chosen the "boat" category.

"What we are doing is basically leveraging the neural responses of the visual system," he said. "Our brain is a much faster image processor than any computer is. And it's better at detecting subtle differences in an image."

Ries said that in a typical image analysis scenario, an analyst might have a large image to look over, and might accomplish that by starting at the top left and working his way down, going left to right. The analyst would look for things of interest to him. "It takes a long time. They may be looking for a specific vehicle, house, or airstrip - that sort of thing."

What Ries and fellow researchers are doing is cutting such an image up into "chips," smaller sections of the larger image, and flashing them on a screen in the same way the boats and pandas and butterflies appeared on the screen for the Soldier.

"The analyst sits in front of the monitor, with the electroencephalogram on measuring his brain waves," Ries said. "All the little chips are presented really fast. They are able to view this whole map in a fraction of the time it would take to do it manually."

The computer would then measure the analyst's neural response to each chip viewed.

"Whenever the Soldier or analyst detects something they deem important, it triggers this recognition response," he said, adding that research has shown that as many as five images per second could be flashed on the screen, while still getting an accurate neural response. "Only those chips that contain a feature that is relevant to the Soldier at the time - a vehicle, or something out of the ordinary, somebody digging by the side of the road, those sorts of things - trigger this response of recognizing something important."

Images identified by the analyst's mind as being of interest would then be tagged for further inspection.
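A rough sketch of that chip-and-flag workflow, under assumed parameters, might look like the Python below. The chip size, the stand-in neural_score() function and the flagging threshold are all hypothetical; in the real system the score would come from an EEG classifier reading the analyst's recognition response to each flashed chip.

```python
# A rough sketch of the chip-and-flag workflow described above.
import numpy as np

rng = np.random.default_rng(1)

CHIP = 128  # chip edge length in pixels (assumed)

def chip_image(image: np.ndarray, size: int = CHIP):
    """Yield (row, col, chip) tiles covering the image."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, image[r:r + size, c:c + size]

def neural_score(chip: np.ndarray) -> float:
    """Stand-in for the EEG recognition response to one flashed chip."""
    return rng.random()  # placeholder: the real score comes from brainwaves

image = rng.integers(0, 255, size=(1024, 1024), dtype=np.uint8)
THRESHOLD = 0.95  # illustrative cutoff for "of interest"

flagged = [
    (r, c) for r, c, chip in chip_image(image)
    if neural_score(chip) > THRESHOLD
]
print(f"{len(flagged)} chips tagged for further inspection: {flagged[:5]}")
```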

The automated system could greatly reduce the time it takes to process an image. That means more of that gathered intelligence data can be processed sooner, so it can more quickly be of value to Soldiers on the ground.

When Ries and his fellow researchers cut a larger intelligence image into smaller parts and display them in rapid succession to an analyst, the analyst still has to look at the entire image - the same number of square inches overall. But Ries said that by cutting it into smaller chips and displaying them rapidly, they take much of the manual work out of the analysis.

Instead of sliding his fingers over the image, marking on it, writing something or typing, the analyst has only to think "of interest" or "not of interest." That kind of decision can be made almost instantly, and a computer hooked to an EEG can detect when the decision has been made and what it is, tag the image with the result, and present the next image - all in a split second.

ELIMINATING NOISE

Ries' particular research looks at how other things an analyst might be doing during image analysis affect the neural signals his brain generates.

When the Soldier volunteer initially put on the EEG sensors, Ries displayed the output of the device on the computer screen - a series of what looked like sine waves moving across it. When he asked the Soldier to clench his jaw, the waves on the screen changed immediately and dramatically. This was extraneous noise from muscle activity in the jaw, picked up by the EEG sensors.

What was on the screen was still the Soldier's brainwaves, jaw clenched or not, but the extra noise a clenched jaw adds to the EEG output could make it difficult for the researchers' software to detect the important neural signals. Ries called these extraneous signals "artifacts."
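One common way such artifacts are screened out, sketched below under assumed numbers, is to reject any EEG epoch whose peak-to-peak amplitude exceeds a threshold, since muscle noise dwarfs genuine brain signals. The 100-microvolt cutoff and array shapes are illustrative assumptions, not the lab's settings.

```python
# A simple sketch of amplitude-based artifact rejection.
import numpy as np

rng = np.random.default_rng(2)

N_EPOCHS, N_SAMPLES = 50, 250
epochs = rng.normal(0.0, 10.0, size=(N_EPOCHS, N_SAMPLES))  # ~10 uV EEG
epochs[7] += 300.0 * np.sin(np.linspace(0, 20, N_SAMPLES))  # jaw-clench burst

PTP_LIMIT_UV = 100.0  # assumed rejection threshold, in microvolts
ptp = epochs.max(axis=1) - epochs.min(axis=1)
clean = epochs[ptp <= PTP_LIMIT_UV]
rejected = int((ptp > PTP_LIMIT_UV).sum())
print(f"kept {len(clean)} of {N_EPOCHS} epochs; rejected {rejected}")
```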

What Ries is looking at is how other types of tasks influence the neural signals related to target recognition. For example, what happens to the neural signal when the analyst has to listen to somebody talk while trying to do image analysis at the same time? He wants to figure out what needs to be done, and what information needs to be gathered, so the algorithms that make this work possible can be adjusted to remain effective.

"Maybe you have an analyst who is looking at an aerial image, but is also listening to auditory communications," Ries said. "How does multi-tasking affect the target recognition response? If we can characterize the way different task loads affect the response, we can update our classification algorithms to account for that."

Ries and fellow researchers are also working on a way to incorporate eye movement into their work.

Where one Soldier had volunteered to look at an array of images on a screen, another volunteered to play a game on a nearby computer. The goal was to shoot a "bubble" of one color at a cluster of other bubbles at the top of the screen. Where multiple bubbles of the same color touched, they would fall away. Typically the game would be played with a mouse or keyboard. But in this instance, it was the Soldier's eyes that told the bubble where to go.

Ries told the Soldier to simply look at the spot on the screen where he wanted the game to "shoot" the bubble, and that would be where the bubble went. And that's exactly what happened.
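As a toy illustration of the idea, the sketch below converts a tracked gaze point into a firing angle for a cannon at the bottom-center of the screen. The screen dimensions and function names are assumptions, not the lab's actual game code.

```python
# Toy sketch: map a gaze point to a firing angle for the bubble cannon.
import math

SCREEN_W, SCREEN_H = 1920, 1080       # assumed display resolution
CANNON = (SCREEN_W / 2, SCREEN_H)     # cannon sits bottom-center

def aim_from_gaze(gaze_x: float, gaze_y: float) -> float:
    """Return the firing angle in degrees (0 = straight up)."""
    dx = gaze_x - CANNON[0]
    dy = CANNON[1] - gaze_y           # screen y grows downward
    return math.degrees(math.atan2(dx, dy))

print(aim_from_gaze(960, 200))   # gaze straight above the cannon -> 0.0
print(aim_from_gaze(1500, 400))  # gaze up and to the right -> ~38.5
```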

Like a clenched jaw, eye movement introduces artifacts into a neural signal. But if Ries and his fellow researchers can feed their algorithms information about when an analyst's eyes are moving, and where they fixate on the screen, they can both correct for those artifacts and put the gaze data to work for intelligence analysis.

"One thing we have done is instead of having people view images at the center of the screen, we're leveraging eye-tracking to know whenever they fixate on a particular region of space," he said. "We can extract the neural signal, time-locked to that fixation, and look for a similar target response signal. Then you don't have to constrain the image to the center of the screen. Instead, you can present an image and the analyst can manually scan through it and whenever they fixate on an item of interest, that particular region can be flagged."

"We want to create a solution where image analysts can quickly sort through large volumes of image data, while still maintaining a high level of accuracy, by leveraging the power of the neural responses of individuals," he said.
