The Engineering Behind BCIs (audio article)
A brain-computer interface (BCI) is any device that creates a link between the brain and external equipment. Different technologies enable the brain to send or receive information without relying on activities such as speech or vision. Some BCIs extract information from the brain, while others stimulate the brain.
BCI technology has been maturing over many decades. For example, electroencephalography (EEG), which measures the brain's electrical activity, was first demonstrated a century ago. Other technologies have been used successfully to stimulate the brain for therapeutic purposes, including cochlear implants to aid hearing and deep brain stimulators for movement disorders such as Parkinson's disease.
Recent advances have put BCIs into the headlines. High-profile advocates predict that BCIs will revolutionize our relationship with technology,[1] while the medical community is excited to develop new methods for treating neurological conditions. With these headlines, one might think of BCIs in terms of the science fiction of Isaac Asimov or William Gibson. This article investigates the reality of these developments and the engineering behind them.
Interfacing with the Brain
Three general methods exist for creating an interface with the brain: noninvasive, semi-invasive, and invasive. EEG technology, though a measurement technique rather than a complete interface, is an example of a noninvasive method. Because EEG can detect the brain's surface electrical activity through the skull, the equipment can be fitted externally on the user or patient. The relatively coarse signals measured by an EEG reflect overall brain activity and can be used to provide relatively simple mobility solutions, such as controlling a wheelchair.
However, noninvasive methods are limited in their capabilities. They cannot measure signals from deeper within the brain's structure. In addition, the surface signals become weaker or are distorted as they pass through the patient's skull and skin before reaching the detector. Semi-invasive methods, such as implanting electrodes under the skull but outside the brain, provide higher-quality signals but require surgery. Any invasive or semi-invasive procedure, regardless of how routine it is, carries some element of risk, including damage to healthy tissue, infection, and possible rejection of the implant. The use of noninvasive techniques eliminates these risks. However, the ability to collect data from electrodes near individual neurons offers the potential for a deeper understanding of how the brain works.
In invasive methods, sensor electrodes are implanted directly into the brain, and each sensor array can contain hundreds of recording sites. Because the detected signals are electrical, the sensors must be conductive. Modern technologies for invasive BCIs build the recording sites on a carrier made of silicon or an insulating polymer such as polyimide. The sensors are either metallic, using materials such as iridium oxide or platinum, or coated with a conducting polymer such as poly(3,4-ethylenedioxythiophene).
This invasive technique allows signals to be detected from within the brain's structure in three dimensions. To better understand how this is achieved, one must understand how the brain functions. The building blocks of the brain are neurons; each neuron can be described as a balloon of electrical charge. When a neuron is excited, it spikes, firing a stereotyped action potential that produces a brief, detectable change in voltage.
These detectable signals are very small. Even a large spike may be as little as 100 µV and last only about 1 ms. The spiking behavior of each neuron is stereotyped and consistent and can be detected by an array of recording sites, each one close to a different neuron or set of neurons. Spikes occur at different times as neurons fire, and each spike is recorded on a nearby electrode. The overall effect creates a field that is the sum of all the synaptic inputs in that brain region. Therefore, while individual neurons can be considered elemental, the brain's overall function requires the neurons to work in concert to create this field potential.
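As a rough illustration of how such spikes can be picked out of a recording, the short Python sketch below flags threshold crossings on a single channel. The sampling rate, threshold, and the detect_spikes helper itself are illustrative assumptions, not details taken from any particular BCI.

    import numpy as np

    def detect_spikes(signal_uv, fs_hz=30000, threshold_uv=-50.0, refractory_ms=1.0):
        """Return sample indices where the signal crosses a negative threshold.

        signal_uv     : 1-D array of voltages (microvolts) from one recording site
        fs_hz         : sampling rate (30 kHz is typical for invasive recordings)
        threshold_uv  : crossing threshold; real systems often derive this from the
                        measured noise level rather than using a fixed value
        refractory_ms : minimum spacing between detected spikes (a spike lasts ~1 ms)
        """
        refractory = int(fs_hz * refractory_ms / 1000)
        # Indices where the signal drops below the threshold from above it.
        crossings = np.where((signal_uv[1:] < threshold_uv) &
                             (signal_uv[:-1] >= threshold_uv))[0] + 1
        spikes, last = [], -refractory
        for idx in crossings:
            if idx - last >= refractory:   # enforce the refractory period
                spikes.append(idx)
                last = idx
        return np.array(spikes)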
Even with the increased sensitivity attained through invasive techniques, targeting the right part of the brain presents problems depending on which function the equipment is intended to measure or augment. Memory is possibly the most challenging problem. Despite considerable research into how the brain processes memories,[2] this function remains poorly understood and hard to measure, especially owing to the distributed way in which the brain appears to store information.
Other aspects of the brain's function can be measured more easily. The brain processes the senses in a more localized and immediate fashion, so each region can be targeted more successfully. Vision provides an example of this more targeted approach, and ongoing research seeks to produce artificial sight by stimulating the primary visual cortex.[3]
This work is made easier because these stimulations are immediate. Whether the stimulation is visual, auditory, or of some other form, the brain responds instantly, with no need to store information for later recall. This makes it easier to measure and stimulate these areas of the brain to produce a usable result.
While many techniques exist for detecting information within the brain and even stimulating the brain to produce a response, success depends on the interface between the brain and the computer as well as how data are interpreted. Therefore, the choice of strategy depends on the intended outcome—but the comfort and convenience of the technology must be a major consideration. Noninvasive devices can be used with little long-term impact on a user's quality of life. For example, an EEG device is worn like a cap and removed once its use is no longer necessary. This lends itself well to therapeutic applications for BCIs, and EEG devices are useful for diagnosing conditions such as epilepsy and stroke. These devices are also ideal for lifestyle and wellness applications like sleep analysis.
Processing with the Computer
The brain is an electrical organ. It processes, stores, and reacts to information. In that way, it is like a computer, but we must not follow that analogy too far. The brain does not work like a microprocessor, with transistors flipping between two states. However, it is similar enough that we can use technology to interface with it: the brain operates on electrical signals that we can detect and interpret. While the necessary engineering is complex, extracting data from the brain is still possible.
The computer in a BCI forms the central hub that processes and interprets signals to enable communication between the brain and the outside world. It performs signal acquisition and data processing tasks. Beyond handling raw data, interpretation is the most significant task the computer faces.
The first parts of the process are relatively easy to automate. Signals detected within the brain, whether by invasive or noninvasive methods, are small. Often recorded simply as fluctuations in voltage, these signals must be amplified and filtered to remove noise created by the body. For example, every muscle movement is driven by electrical signals that interfere with the information collected from the brain.
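A minimal sketch of the filtering step, assuming the raw samples have already been digitized and using SciPy with purely illustrative band edges, might look like the following. Real systems tune these parameters to the signal of interest, and because muscle artifacts overlap the frequencies being measured, filtering is only a partial remedy.

    import numpy as np
    from scipy import signal

    def clean_neural_signal(raw, fs_hz=1000.0, band=(1.0, 100.0), mains_hz=50.0):
        """Band-pass the recording and notch out power-line interference.

        raw      : 1-D array of raw voltage samples
        fs_hz    : sampling rate of the recording
        band     : pass band in Hz (illustrative; depends on the signal of interest)
        mains_hz : power-line frequency to suppress (50 or 60 Hz)
        """
        # Band-pass filter to keep the physiologically relevant frequencies.
        b, a = signal.butter(4, band, btype="bandpass", fs=fs_hz)
        filtered = signal.filtfilt(b, a, raw)

        # Notch filter to suppress mains noise picked up by the electrodes.
        bn, an = signal.iirnotch(mains_hz, Q=30.0, fs=fs_hz)
        return signal.filtfilt(bn, an, filtered)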
Once the signals have been captured, the work of identifying what they mean begins. A microprocessor must handle a large volume of real-time brain information, and researchers are investigating how the dataset could be reduced while still providing useful functionality.[4]
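One common way to shrink the dataset is to summarize each window of raw samples as a handful of features, such as the power in a few frequency bands. The sketch below reduces hundreds of samples to just a few numbers; the band choices and the band_power_features helper are assumptions made for illustration.

    import numpy as np
    from scipy import signal

    def band_power_features(raw, fs_hz=250.0, bands=((8, 12), (13, 30))):
        """Reduce a window of raw samples to one power value per frequency band.

        raw   : 1-D array of samples for one channel and one time window
        bands : (low, high) frequency ranges in Hz; alpha and beta here are
                illustrative choices, not a prescription
        """
        freqs, psd = signal.welch(raw, fs=fs_hz, nperseg=min(len(raw), 256))
        features = []
        for low, high in bands:
            mask = (freqs >= low) & (freqs <= high)
            features.append(psd[mask].mean())   # average power in the band
        return np.array(features)   # a few numbers instead of hundreds of samples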
Furthermore, BCIs differ significantly in their intended purposes, the volume of data they require, and the speed with which those data must be processed. One of the key demands on any computer system is extracting meaning from the raw data, whether that meaning corresponds to specific mental states, commands, or intentions.
Artificial intelligence (AI) is well suited for analyzing large and noisy datasets, such as brain signals. AI algorithms can identify patterns, filter irrelevant noise, and decode neural activity to translate thoughts or intentions into practical commands with high precision. This is crucial for applications in which BCIs control prosthetic limbs or provide communication.
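As a deliberately simplified illustration of such decoding, the sketch below trains a logistic-regression classifier on synthetic feature vectors and maps its output to two hypothetical commands. Real decoders are trained on recorded neural data and are considerably more sophisticated.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for band-power features from two imagined movements.
    left_trials  = rng.normal(loc=[1.0, 0.2], scale=0.3, size=(100, 2))
    right_trials = rng.normal(loc=[0.2, 1.0], scale=0.3, size=(100, 2))

    X = np.vstack([left_trials, right_trials])
    y = np.array([0] * 100 + [1] * 100)   # 0 = "move left", 1 = "move right"

    decoder = LogisticRegression().fit(X, y)

    # A new window of features is translated into a command for the device.
    new_features = np.array([[0.9, 0.3]])
    command = "move left" if decoder.predict(new_features)[0] == 0 else "move right"
    print(command)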
AI will also be key in understanding the nonlinear relationships between brain activity and desired outcomes, which is especially critical for instances in which the brain has been damaged. While the structure of the brain is well understood, the damage caused to the brain by strokes and other injuries will be unique to each patient. Fortunately, the plasticity of the human brain allows it to reorganize itself. Damaged sections can be bypassed through rehabilitative training, where repetition helps build stronger links between healthy neurons. This process, known as Hebbian learning, helps restore motor control.
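The Hebbian principle, often summarized as "neurons that fire together wire together", can be expressed as a simple weight-update rule. The sketch below is a toy model of that rule, with made-up activity values and learning rate; it illustrates the idea rather than any rehabilitation protocol.

    import numpy as np

    def hebbian_update(weights, pre, post, learning_rate=0.01):
        """Strengthen connections in proportion to correlated activity.

        weights : (n_post, n_pre) array of connection strengths
        pre     : activity of the presynaptic neurons
        post    : activity of the postsynaptic neurons
        """
        return weights + learning_rate * np.outer(post, pre)

    # Toy example: repeated co-activation gradually strengthens one connection.
    w = np.zeros((1, 2))
    for _ in range(100):
        w = hebbian_update(w, pre=np.array([1.0, 0.0]), post=np.array([1.0]))
    print(w)   # the weight from the active input grows; the silent one does not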
The marriage of AI and BCIs will be the driving force that makes brain-machine communication a practical solution. As AI continues to evolve, it will enhance the capabilities of BCIs, bringing us closer to applications that were once the realm of science fiction—from thought-driven communication to new levels of human-computer collaboration.
Conclusion
The design and manufacture of practical BCIs combine many technologies, but the implications go beyond engineering considerations and must include the safety and comfort of the user. Biocompatibility and long-term stability, particularly for invasive devices, must be addressed to ensure these systems interact seamlessly with the brain without creating safety concerns. At the heart of this challenge, the choice between invasive and noninvasive techniques is complicated by the differing capabilities they provide.
Capturing the information is just the first step. How the data are interpreted and the speed with which they are processed will have major implications for how useful these technologies will become. AI and its ability to find patterns within vast amounts of data may be critical to BCI success, but other innovations will be just as important. The miniaturization of hardware—which allows lightweight, portable, and comfortable devices to be created—will play an important role in BCI acceptance by the public.
Whether these technologies are confined to medical and therapeutic applications or eventually support everyday interfaces between humans and computers, such as those imagined by William Gibson, remains to be seen. However, advances in materials science, AI, machine learning, and signal processing offer the chance for significant progress.
[1] https://www.independent.co.uk/tech/elon-musk-neuralink-brain-computer-interface-b2486983.html
[3] https://www.sciencedirect.com/science/article/pii/S0092867420304967
[4] https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2021.578875/full