
Ethical Concerns for Brain-Computer Interfaces

Mouser Electronics recently talked to Helene Huang, senior research associate at the Centre for Trustworthy Technology, who specializes in the ethics of neuroengineering. We were keen to understand whether the future of brain-computer interfaces (BCIs) presents ethical concerns, so we asked Huang for her insights into the ethical considerations of developing and using them.
 
Mouser Electronics: Is new technology placing new ethical demands on us?

Helene Huang: BCIs are not new. The electroencephalogram (EEG) was first demonstrated 100 years ago, and we’re still using it today. And invasive technologies have been in use for decades too. Cochlear implants help thousands of patients suffering from hearing loss—installing them requires an invasive procedure.

The distinction between invasive and noninvasive technologies is important when we think about the ethical use of BCIs. A noninvasive device is usually something worn by the user, and it’s completely removable. Some devices are commercially available now that use EEG sensors to measure brain activity and provide feedback for promoting relaxation and improved sleep.

Does that mean that consumers will expect to have access to BCIs?

Most of the BCI applications that capture the public imagination are based on invasive technologies. Whether it’s for controlling prosthetic arms or helping patients with neurological disorders, the technology is mostly going to be invasive. In fact, invasive techniques are used only in medical applications.

Placing a BCI directly into the brain requires a medical procedure. To risk this kind of operation, we need to be clear [about] how the implant will be used. Even in the most groundbreaking clinical trials, surgeons have to consider the well-being of the patient. Inserting devices into a complicated structure like the brain will create risks.

Will invasive BCI technology be a purely medical technique?
A lot of the research into BCIs is focused on medical uses. Even then, surgeons must consider the risk of a procedure and balance it with the potential benefits. We’ll probably find plenty of healthy users who want to create an interface between their brain and the technology they use, but [finding] a surgeon who would be willing to conduct such an operation without a clear clinical need [is unlikely].

We also have to look at the long-term care of the user. The human body is very good at repairing itself. It tries to remove foreign objects that might harm it. Surgeons battle with this problem every time they perform a transplant—their patients need lifelong monitoring to make sure the organ isn’t rejected. Even patients who have simple operations like hip replacements must be vigilant for signs of long-term complications. The aftercare and long-term monitoring of a patient who has undergone invasive surgery on the brain demand the same [attention]. A commercial company taking on lifelong responsibility for the care of its customers is hard to imagine.

I think that invasive technology is unlikely to reach the commercial marketplace soon. I’ve spoken to professors who believe it won’t ever happen.

BCIs collect information from the brain. Do we need to consider a new approach to protecting these data?
We do need to consider the data and what they will be used for, but the information being collected isn’t special just because it comes from the brain. Neural activity is just another form of biometric data.

I can already track my brain waves and see if I’m tired or sleepy. That’s not much different [from] the heartbeat or blood oxygen data that get measured by my smartwatch. Biometric data are private, but the medical industry already ensures that those data stay secure. I think that the [US] Food and Drug Administration does a wonderful job of regulating certain medical technologies and devices, and the processes it has set up are based on decades of experience. They’re not foolproof, but I would say they are pretty reliable in terms of determining the rights and safety of the patient. I don’t believe that we need a separate set of guidelines for neural activity.

I think this is something people are getting distracted by. A lot of people are discussing ethical questions that we just don’t need to address now. People want to talk about privacy in BCIs. What if this technology allowed you to record people’s thoughts? That sounds like an invasion of privacy, but the reality is that we do not have the capacity to record people’s thoughts using BCI technologies. We need to be clear on the topics that matter.

So, you do not think we should be worried about BCI technology being misused?
As BCI technologies and interpretation tools advance in the future, there is a possibility that captured data could be leveraged for targeted marketing or profiling. For now, the privacy risks associated with noninvasive BCI data are like those for other biometric data, such as heart rate data from wearable biosensors. Data collected from invasive BCIs should be protected in the same way as other sensitive medical data. Even without BCI technologies, the data already tracked on our personal devices can be used for that kind of profiling.

Do you feel that we do enough to limit the technology, as opposed to limiting the user of the technology?
The most immediate question is whether FDA regulations currently governing medical devices are sufficient for BCIs. However, a BCI device is not autonomous. It’s just a chip; it records neural signals. Some bidirectional BCIs record both ways—you can send signals to the computer and the computer can send signals to you—but without the human, the BCI chips that I’m aware of right now can’t act on their own. That’s a good question for something like artificial intelligence. But for BCIs, the whole point is to enable the user. If BCIs were to advance to the point where they become autonomous, then we will have a lot more ethical questions to address, and that would be an entirely different conversation from the one we’re having today.

How do you think future developments will change our view of the ethical use of BCIs?
For the moment, any concerns are limited by the technology. We cannot yet use BCIs to detect thoughts, and we can’t change memories. We don’t even fully understand how the brain processes and stores them. But research is being conducted all the time, and it is possible that we will develop a technology that we don’t fully understand.

We still don’t really know the long-term effects of invasive BCIs, and there may be no way to learn them while we do not fully understand the system we are studying. With that in mind, we must be aware of our role in the development of these systems. As engineers, technologists, innovators, and clinicians, we must try our best to address each ethical question as it arises.

Helene, thank you so much for your time and advice.

About the Author

Helene Huang is a senior research associate and graduate student at Johns Hopkins University, studying biomedical engineering with a concentration in neuroengineering. Her primary research focuses include artificial intelligence in medicine, synthetic biology, and other advancements in bioengineering. She’s particularly interested in neurotechnology, like brain-computer interfaces, and is a member of the student and postdoc committee at the Brain-Computer Interface Society.
