Early last year, Ivan Martinovic, then a postdoctoral researcher in the Security Research Lab at UC Berkeley, found himself wondering about the security implications of a headset used with high-end gaming consoles. The brain-computer interface, or BCI, supports nonmuscular communication between a wearer and an external device, essentially allowing communication through thought — an obvious draw for gamers and other novelty seekers.
Would it be possible, Martinovic wondered, for a hacker to launch an outside attack directly on a user’s brain? The device, after all, reads brain waves. Could this consumer-grade device be hijacked to compel users to unknowingly reveal private mental data?
He consulted his old friend Tomas Ros, a neuroscientist at the University of Geneva who specializes in the use of BCI devices in the treatment of brain disorders.
It might be possible, Ros surmised, but they would have to design and perform experiments to know for sure.
And so Martinovic, Ros and a team of UC Berkeley security researchers decided to do just that.
“It’s the typical mindset of a security researcher,” explained Mario Frank, a postdoctoral researcher at UC Berkeley and a co-author of the study. “You get a new device and think of how you could attack it.”
Potential for attack
Study participants were presented with various questions on a computer screen, such as what part of Berkeley they lived in or the first digit of their PIN. Possible answers were then rapidly flashed before participants, triggering brain signals that the device could pick up.
Using the BCI device, the research team was able to detect P300 brain waves — which the brain emits when an individual recognizes a meaningful stimulus — and harness that information to infer personal data from wearers.
Programs developed by the researchers were able to correctly infer participants’ regional home location 30 percent of the time, their month of birth 60 percent of the time and the first digit of their PIN 20 percent of the time.
While the success rates were far from perfect, the fact that the researchers were able to extract personal mental data from a device that retails for about $200 makes the study the first of its kind.
“The simplicity of our experiments suggests the possibility of more sophisticated attacks,” the authors concluded in their paper, grimly titled “On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces.”
In other words, the researchers said, the study could have enormous implications for the future of multiplayer online games and personal security as a whole.
“It’s not a realistic threat at the moment,” said Martinovic, who is now a faculty member in the computer science department at the University of Oxford, “nor in five years for the reason that attackers have much simpler ways to get data, like phishing attacks on websites or database updates.”
Not now, perhaps, but in coming years, so-called “brain hacking” might become a real concern.
“As with many technological innovations, new attacks could emerge as technologies evolve,” said UC Berkeley EECS assistant professor Dawn Song, who specializes in computer security and was an investigator for the study.
Staying ahead of the hackers
The real threat, researchers emphasized, will not come from the device itself but from the application market for it.
The app market for BCI devices is open, much like the Android app market, meaning that anyone can design and upload games for others to download. The researchers believe that for "brain hacking" to become a real concern, someone would have to upload applications that surreptitiously present users with stimuli meant to trigger the P300 waves. Hackers could then harness those waves for malicious purposes.
“In order for such an attack to be a real threat, there must be more users, more apps and the device must be more precise,” explained Frank, who works in Song’s lab.
In the study, the investigators had to present possible answers to the questions they posed — like flashing all 12 months in which a person could have been born — in order to provoke the telltale brain waves that would allow them to guess a user's personal data.
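The logic of such a guessing attack can be sketched in a few lines of code. The simulation below is purely illustrative and is not the study's actual software: all names, signal values and parameters are invented. It flashes each candidate answer (here, the 12 months), averages simulated one-second EEG epochs for each candidate, and guesses the candidate whose average shows the largest deflection in the 250–500 millisecond window where the P300 appears.

```python
# Illustrative sketch of a P300-style guessing attack. Hypothetical
# parameters throughout; not the researchers' actual code.
import random

FS = 128                                          # samples/second (typical of consumer headsets)
EPOCH = FS                                        # one-second epoch after each stimulus flash
P300_WIN = range(int(0.25 * FS), int(0.5 * FS))   # 250-500 ms post-stimulus window

def simulate_epoch(is_target):
    """One noisy simulated EEG epoch; the recognized (target) stimulus
    gets an extra positive bump in the P300 window."""
    sig = [random.gauss(0.0, 1.0) for _ in range(EPOCH)]
    if is_target:
        for i in P300_WIN:
            sig[i] += 2.0                         # simulated P300 deflection
    return sig

def guess_secret(candidates, secret, flashes_per_candidate=20):
    """Flash each candidate repeatedly, average its epochs, and guess the
    candidate with the largest mean amplitude in the P300 window."""
    scores = {}
    for c in candidates:
        epochs = [simulate_epoch(c == secret) for _ in range(flashes_per_candidate)]
        window_avg = [sum(e[i] for e in epochs) / len(epochs) for i in P300_WIN]
        scores[c] = sum(window_avg) / len(window_avg)
    return max(scores, key=scores.get)

months = list(range(1, 13))
print(guess_secret(months, secret=7))             # recovers the simulated "birth month"
```

Averaging repeated flashes is what makes the attack work in this sketch: random noise cancels out across trials while the stimulus-locked P300 bump does not, so the secret answer's score eventually dominates. Real EEG is far noisier, which is one reason the study's actual success rates stayed well below 100 percent.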
“A killer might respond more vigorously to the face of someone they killed rather than someone they had never seen,” Ros said. “If you hear your own name, your P300 will be larger than if you hear other people’s names.”
For their attacks to succeed, hackers would need to work covertly, as users would not give away personal data if they knew they were under attack. The researchers are therefore trying to stay a step ahead of hackers by testing more clandestine stimuli in a second phase of the study, currently underway.
“We are continuing this line of research and looking for further ways hackers might be able to assault users’ brains,” Frank said.
As the quality of BCI devices steadily improves, the success rate of such attacks will likely rise. Needless to say, emails from Nigerian royalty politely requesting bank account information are starting to look quaint.
In their paper, the researchers conclude with an ominous warning.
“The development of new attacks can be achieved with relative ease,” they write, “and is only limited by the attacker’s own creativity.”
Sara Grossman is the lead research and ideas reporter. Contact her at [email protected].