Bridging minds and machines
Raymond Yin
Welcome to another season of The Tech Between Us. I'm Raymond Yin, your host and director of technical content at Mouser Electronics. We have another exciting year lined up and look forward to exploring the latest technology topics with experts working in those areas. We hope you join us, and with that, let's get started.
Most listeners of this podcast know that I'm a huge Star Trek fan. To me, one of the great tragedies of Trek is the ultimate fate of the second captain of the USS Enterprise, Christopher Pike. For those who aren't familiar with the storyline, spoilers ahead. Pike heroically rescues a cadet during an inspection accident, but has his body mangled by exposure to delta rays. He is left unable to move and confined to a futuristic wheelchair, only able to communicate by blinking an LED once for yes and twice for no. According to Dr. McCoy, “we've learned to tie into every human organ in the body, except one, the brain. The brain is what life is all about.”
We may not have warp drive or transporters … yet, but we are actually ahead of that particular world of Trek in being able to understand the brain and tie into it to some degree. That's today's topic: brain-computer interfaces. To help us understand how scientists and engineers are accomplishing this feat, we have with us today Dr. Dan Rubin, critical care neurologist at Massachusetts General Hospital and instructor at Harvard Medical School. Dr. Rubin has both an MD and a PhD in computational neuroscience from Columbia University, and is also a member of the BrainGate Consortium researching brain-computer interfaces. Dan, welcome to The Tech Between Us.
Dan Rubin
Thanks for having me. I'm thrilled to be here.
Raymond Yin
You certainly do wear many hats. Can you tell us a little bit about your research areas and more specifically what BrainGate does?
Dan Rubin
From a research perspective, I work on developing intracortical brain-computer interfaces to help restore mobility, communication, and function to people who have paralysis. And that's really what the BrainGate Consortium and the BrainGate clinical trial are all about. So since roughly 2004, we've been working on developing a way of extracting neural signals using an intracortical sensor that measures the brain's activity as people with paralysis try to make different movements and actions, and then using computer systems to decode what they're trying to do and give them the ability to control different types of assistive technology, be it robotic arms, computer interfaces, text generation devices, and the like.
Raymond Yin
Okay. Wow. 2004, I did not realize that research had been going on for quite that long into the current style of implanted BCI.
Dan Rubin
Yeah, it's been a long road to some extent. You can sort of trace the history of brain-computer interfaces all the way back to the history of neuroscience, for as long as people have been investigating the brain and the nervous system and trying to understand how it works: how does a collection of individual cells encode conscious movement, conscious thought, our memories, you name it?
The name of the game has always been: how is this information actually encoded, stored, and transmitted? What do we do with it? For a while, back in the 1960s and 1970s, when neuroscientists first started recording from brain cells, they were limited to recording from one or maybe two or three brain cells at a time. You could extract a little bit of information and say, oh, this is a brain cell that seems to get very excited every time this lab animal, say, is looking at a visual stimulus, and you can understand the tuning properties of individual brain cells. But it wasn't really until the early ‘90s that we had the ability to record from populations of brain cells, hundreds or more at a time. And that's really what gave birth to the modern era of brain-computer interface technology, where we can think about decoding an entire population of brain cells to understand a more complex movement.
Raymond Yin
And I've seen some of the earlier systems, and I'm not sure if they're earlier or just a different style, where the patient wears a kind of helmet with EEG sensors all over it. And obviously, more recently, the implanted versions are getting more press. How different are those two techniques?
Dan Rubin
That's a great question. And I think when thinking about BCIs, that's sort of the first fundamental distinguishing point: implanted versus non-implanted systems. So non-implanted systems are those that rely on, generally, scalp EEG sensors. Those have been around for a while, and there's been a lot of research with them because they're not implanted, so the risks of working with them are very low and they're very accessible. But there are some fundamental limitations to what you can actually do with those, mainly due to the physical properties of the human body. Individual brain cells, neurons, communicate with each other through action potentials, these little spikes in electrical activity.
And these are fast; they happen on the order of a millisecond or two. So when you want to record from brain cells, it's important to have access to the electrical activity at that temporal resolution, on the order of milliseconds. The problem is that when you're using a scalp EEG system, you're measuring the electrical activity of those brain cells from outside of the scalp. So the signal has to travel through the skull and through the scalp and through the skin, and it gets highly attenuated, and the scalp, and the bone of the skull, happens to be a really powerful low-pass filter. So by the time the signal gets out, you lose anything that's really got a frequency above about 40 hertz or so.
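To make that filtering concrete, here is a minimal numerical sketch, not anything from an actual EEG or BrainGate stack: it passes a synthetic 1 ms, 100-microvolt spike through a 40 Hz low-pass filter standing in for the skull and scalp. The Butterworth filter and all the numbers are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30_000                    # sample rate in Hz (a typical intracortical rate)
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of signal

# Crude 1 ms, ~100 microvolt "action potential" centered at t = 50 ms
spike = 100e-6 * np.exp(-((t - 0.05) / 3e-4) ** 2) * np.sin(2e3 * np.pi * (t - 0.05))

# 4th-order Butterworth low-pass at 40 Hz, standing in for the skull/scalp
sos = butter(4, 40, btype="low", fs=fs, output="sos")
smeared = sosfiltfilt(sos, spike)

print(f"peak before: {1e6 * np.abs(spike).max():.1f} uV")
print(f"peak after 40 Hz low-pass: {1e6 * np.abs(smeared).max():.2e} uV")
# The millisecond-scale spike all but vanishes, which is why scalp EEG
# only sees the slow, averaged activity of huge neuron populations.
```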
Raymond Yin
Okay, so you lose a lot of resolution.
Dan Rubin
Exactly. And so what you're measuring is the average activity of large populations, about 10 square centimeters of cortex, hundreds of millions of neurons at once. So you lose a lot of specificity, and ultimately, from a pure information science standpoint, there's just a very low signal-to-noise ratio in that signal. They have been used to develop some low-bandwidth communication tools, but most of the use for that type of BCI, the non-implanted BCI, has been geared more towards consumer-grade technology, using it as a way to augment how we otherwise interact with computer systems in a more traditional way. And that's in contrast to the implantable BCI systems, of which a few different types have been devised: some that use microelectrode arrays, where you actually have tens to hundreds of individual electrodes sitting right next to brain cells, recording from populations of individual cells, and some that use a type of sensor called an electrocorticography sensor, which looks kind of like an EEG sensor but goes directly on the cortical surface underneath the bone. And so you do have access to those high temporal resolution signals. These types of devices have a much, much higher signal-to-noise ratio. You can do a lot more with the signal, but there's a risk: you have to have some sort of surgery in order to have the sensor placed.
Raymond Yin
Now, how much risk is involved? Obviously, you've got to cut a hole in the skull and implant the sensor. How much risk is there, or is it getting to the point where it's kind of an everyday thing now?
Dan Rubin
It's a great question, and a bit of a subjective one to answer. The risks are the sort of risks associated with other relatively frequently performed surgeries. Now, when one has an implanted BCI placed nowadays, that would all be in the context of a clinical trial, right? You can't just go to your doctor and ask for one; you have to be enrolled in a clinical trial in order to have one of these. But although it would be an experimental, investigational surgery, the surgical technique itself is not particularly new or noteworthy. And in fact, the types of sensors that are used for BCIs have been used and are FDA cleared for use in clinical indications. So people, for example, who have severe epilepsy and are considering a surgical treatment for their epilepsy could go and have these sensors placed in an operating room by a surgeon, and it would be a generally indicated medical procedure; there's nothing unusual about it. The only real difference with BCIs is that the sensors are left in place, and the person comes out of the operating room and goes home.
Raymond Yin
Okay. So if the sensor has been FDA cleared, the procedure has gone through multiple clinical trials and safety reviews.
Dan Rubin
And I should be explicit about this: certain sensors have been FDA cleared, others have not. And the ones that are FDA cleared are cleared for short-term use. So when you see them being used in the context of a BCI, that's still investigational, and the FDA is allowing those research studies to occur, but under the auspices of an IDE, an investigational device exemption. So these are devices that have been studied in the context of FDA clearance for short-term use. They're generally very well tolerated. If you look at the adverse events that have been reported in clinical trials of these technologies, they're the sort of things that you would associate with most implanted surgical devices: sometimes there's irritation around the site of placement, sometimes there's some skin reaction when you have a surgery, but there haven't been any changes in neurologic function, for example, as a consequence of having these devices placed. We, in the BrainGate trial, published what we believe to be the largest safety report of an implantable BCI system, where we looked at all of the adverse events reported in our population. At the time we published, it was 17 years of data with over 12,000 participant-days of safety events. The vast majority of adverse events that we noted, over 80%, had nothing to do with the device at all; they were just, unfortunately, things that sometimes happen to people who have paralysis, things like infections. And of the device-related adverse events that were reported, more than half were, again, just skin irritation around the site of implantation. But no one had any worsening neurologic function as a result of having these sensors placed, no one had an intracranial infection, and no one needed to have the device removed as a consequence of an adverse event.
Raymond Yin
Really?
Dan Rubin
So, yeah.
Raymond Yin
Oh, okay. That's actually a pretty good record there.
Dan Rubin
And that's with credit to all the investigators who have been very closely monitoring all of our participants in the clinical trial. But yeah, it seems to be, at least in our hands, on par with other approved implanted technologies, things like deep brain stimulators for Parkinson's or cochlear implants for people who have hearing loss. We see the same sorts of adverse events, and we've taken all the appropriate precautions to mitigate any others.
Raymond Yin
Okay, outstanding. So yeah, there is some risk, but it's not just that the risk has been minimized; the risk is understood, and you're able to explain to the participants what could happen.
Dan Rubin
Exactly.
Raymond Yin
Okay, terrific. You had mentioned that when measuring signals from individual neurons, these events are about a millisecond long. As an electrical engineer, I'm just kind of curious: electrically, what does the brain look like? I mean, what do the signals look like? How much bandwidth do you need to be able to extract enough information from the brain to create your models and whatnot?
Dan Rubin
That's a great question. So in our experience, perhaps unsurprisingly, the more individual neurons, the more individual brain cells we can record from, the more we can do, the more information we can capture.
Raymond Yin
Sure. Makes sense.
Dan Rubin
The types of sensors we use record from anywhere from 256 to 384 channels. We stream the data at 30 kilohertz, which is pretty high bandwidth, so there's a lot of data coming through. Then how you actually extract useful information from that signal is the art of BCI. There's a lot of pre-processing that goes into it: we downsample the signals, we filter them into relevant frequency bands.
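For a rough sense of scale, here is the arithmetic on those figures as a minimal sketch; the 16-bit sample width is an assumption, since the actual word size isn't stated.

```python
# Back-of-the-envelope data rate for the figures quoted above.
channels = 384          # upper end of the channel counts mentioned
sample_rate = 30_000    # 30 kHz streaming rate
bytes_per_sample = 2    # assuming 16-bit ADC samples (not confirmed)

rate = channels * sample_rate * bytes_per_sample   # bytes per second
print(f"{rate / 1e6:.1f} MB/s")                    # ~23 MB/s
print(f"{rate * 3600 / 1e9:.0f} GB per hour")      # ~83 GB/hour
# A few hours of continuous recording lands in the hundreds of
# gigabytes, consistent with the per-day figure mentioned below.
```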
Raymond Yin
Right? I mean, 384 channels at 30 kilohertz. Yeah. That's a lot of data.
Dan Rubin
That's a lot of data. If someone were to use the system for a day and you recorded all of it, you're talking hundreds of gigabytes of data per day being generated, and we can't analyze that in real time in that format. Again, it's just too much data. So you downsample the data, you extract the relevant features, and that's a big part of developing BCIs: having a thoughtful way to pull out the most informative signals. There are different ways that people do that. One approach is to band-pass filter the signal into different spectral bands, and these have old-fashioned names from the epileptologists who first devised them: there are the lower frequency bands, the theta, alpha, and delta bands, and that's mostly what's used by the folks who use the non-implantable systems. Those are down in the 4 to 40 hertz range, and again, there's only so much information in there. When we're working with the intracortical devices, we're looking more at the higher frequency bands, so the 250 hertz to 5 kilohertz band. And then if we've got really good signals, we're looking at individual action potentials. So your initial question was, what do these signals actually look like? It's hard to describe without showing a picture; they're little squiggles of voltage with these brief …
Raymond Yin
Just impulses.
Dan Rubin
Yeah, and people call them spikes. The most straightforward way to actually figure out which signals you should use is to just look for those spikes, look for excursions in the voltage that exceed two or three standard deviations. You count those up in some 10 or 20 millisecond time bin, and you just keep a running tally. And if you downsample your signal all the way to that, all of a sudden you've got a signal that you can work with much more quickly. That's what you ultimately train your models on, whether it's a simple linear model or a more complex multilayer RNN. Those are the signals that are ultimately used to try and decode the relevant action you're asking the person to try to do.
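Here is a minimal sketch of that threshold-and-bin idea on synthetic data; the threshold, bin width, and noise levels are illustrative assumptions, not BrainGate's actual pipeline.

```python
import numpy as np

fs = 30_000                       # samples per second
bin_ms = 20                       # bin width in milliseconds
bin_len = fs * bin_ms // 1000     # samples per bin (600)

rng = np.random.default_rng(0)
voltage = rng.normal(0.0, 10e-6, fs)  # 1 s of fake noise, ~10 uV RMS
voltage[::3000] -= 80e-6              # sprinkle in fake -80 uV spikes

# Flag negative-going excursions beyond ~3 standard deviations
threshold = -3.0 * voltage.std()
crossings = (voltage[1:] < threshold) & (voltage[:-1] >= threshold)

# Count crossings per 20 ms bin: this low-rate running tally is the
# feature stream the decoding models are actually trained on.
n_bins = len(crossings) // bin_len
counts = crossings[: n_bins * bin_len].reshape(n_bins, bin_len).sum(axis=1)
print(counts[:10])   # spikes per bin for the first 200 ms
```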
Raymond Yin
So obviously these sensors are incredibly sensitive. About what voltages are we talking, millivolts, microvolts, coming directly off the brain?
Dan Rubin
So, these are in the microvolt range.
Raymond Yin
So really small.
Dan Rubin
Really small. Somewhere in the zero to 200 microvolt range is typical depending on the quality of your sensors and how close they are to the source of your signal. When we're looking at, for example, the raw signals, when we've first got a new system set up, we want to make sure the signals look good. We're always pleased when we see big strong signals in the 200, 250 microvolt range. But if they're smaller, then we just work with the smaller signals. That's okay too.
Raymond Yin
Right, and that really makes a lot more sense for the external BCIs, going through, like you were saying, the skull, the scalp, the skin, the hair, and on and on. Something in the microvolt range, by the time it gets to the sensors, is almost unusable at that point.
Dan Rubin
And that's why, when you're using an external sensor like a scalp EEG, you're summing the electrical activity of, again, some population of tens to hundreds of millions of those neurons.
Raymond Yin
Right, right. Just to get enough signal to be able to extract from.
Dan Rubin
Exactly.
Raymond Yin
So now you've pulled the signal from the sensor, you've reduced it to a reasonable amount of data, and you've done your spectral analysis, gotten it into the various Greek letters of the spectrum. From there, do you dump that information off to some massive computing device to train a model of some kind? Would that be the next step?
Dan Rubin
So it depends a lot on what you're trying to do. The short answer is yes, but the more nuanced answer is that it depends on what sort of task you're trying to decode. Much like in all of engineering, you'd like to use the least complicated model you can and go from there. So suppose you're trying to do something relatively straightforward, like, for example, giving someone with paralysis the ability to use the BCI to control a computer cursor on a screen. Now, this is something that if you don't have paralysis, you may take for granted: you can just pick up the mouse and move it around, and the cursor will go where you want it to go.
But if you're working with a clinical trial participant or a patient who has advanced paralysis from something like a spinal cord injury or a neurodegenerative condition like ALS, they may be limited; they may not be able to move their hands at all. In fact, for some of our clinical trial participants, the only volitional movement they can make is with their eyes. So the ability to control a computer cursor on a screen is a huge potential improvement in quality of life, because once you can control a cursor, you can point it at letters on a virtual keyboard, you can communicate, you can load all sorts of communication software that we, again, use on a daily basis without thinking much of it.
But controlling a computer cursor on a screen is a relatively straightforward decoding problem. You've got two dimensions of velocity, and that's what you want to decode. And again, if you're recording from 384 neurons, you've got a very high dimensional signal from which to extract that. So you can use various linear models, or models slightly more complicated than linear, get a pretty good fit pretty quickly, and give somebody smooth cursor control. You don't need a particularly powerful computer to do that sort of model fitting. However, what the field of BCIs has really been moving towards is much more complicated control.
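As a sketch of what such a linear decoder can look like, here is a toy example that fits a ridge regression from synthetic binned firing rates to 2-D cursor velocity. The cosine-style tuning model, the noise levels, and the choice of scikit-learn are assumptions for illustration; real systems often use more elaborate decoders such as Kalman filters.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_bins, n_channels = 5000, 384

# Pretend ground truth: each channel is "tuned" to a preferred direction
tuning = rng.normal(size=(n_channels, 2))
velocity = rng.normal(size=(n_bins, 2))                  # intended vx, vy
rates = velocity @ tuning.T + rng.normal(0, 2.0, (n_bins, n_channels))

# Fit binned rates -> intended velocity on the first 4000 bins
decoder = Ridge(alpha=1.0).fit(rates[:4000], velocity[:4000])
print("held-out R^2:", decoder.score(rates[4000:], velocity[4000:]))

# In closed loop, each new 20 ms bin of spike counts would be fed to
# decoder.predict(...) and the output used to move the cursor.
```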
So, controlling a cursor on a computer screen is a great next step for restoring communication or functional independence for someone with paralysis. But again, to take the example of somebody who's got really severe paralysis, who can only move their eyes: this is someone who's lost the ability to speak. What we can do now is record the neural activity from motor cortex, the part of the brain that controls movement, while we ask our participants to try to talk. We recognize that this is something that, because of your paralysis, you're not able to do, but try to, for example, say the words we're showing on the screen, just read these words. Our participants will try their best, we'll record the brain activity, and then we can fit a much more sophisticated model. Here, it's not a simple linear model, it's a multilayer RNN, to try and match the neural activity we're seeing with the words on the screen. In essence, we can train a more complex model to decode someone's intended speech.
And once the system has been trained and it's working, you finish the training aspect, you've got your model built, and then we go to what we call closed-loop mode, where we basically take the neural data that we're now gathering, feed it into the model, and see what output it infers. In doing so, it gives somebody who's lost the ability to speak a voice back. So now they try to talk, the system decodes their speech in real time, and what they're trying to say appears as text on a screen.
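In code terms, closed-loop mode is conceptually just a streaming loop. The sketch below is purely schematic; the function names (`acquire_bin`, `emit`) are hypothetical placeholders, not part of any real system.

```python
def closed_loop(acquire_bin, decoder, emit):
    """Stream feature bins through an already-trained decoder forever."""
    while True:
        features = acquire_bin()              # spike counts for the latest 20 ms bin
        output = decoder.predict([features])[0]
        emit(output)                          # move the cursor / append decoded text
```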
Raymond Yin
So really, I mean, you're not trying to understand what they're trying to say directly. You're trying to understand the muscle movements of their mouth, their tongue, whatever, decoding that information and extrapolating what they're trying to say.
Dan Rubin
That's exactly right. There are different ways to approach it, but the current approach is to look at the activity in motor cortex and try to understand what movements of the muscles of articulation, the muscles of the face, the mouth, the jaw, they're trying to make at any given point in time. And then from those inferred movements, we predict what sound they're trying to make, and we operationalize that by defining what phoneme they're trying to produce.
There are 39 phonemes in spoken English. So what you ultimately have is an RNN that's been trained to predict, at each time step, and again you can use a 10 or 20 millisecond time bin, what phoneme this person is trying to produce, based on the muscles in their face they're trying to move. And then once you have a reasonably accurate prediction of that, there are large language models that can go the rest of the way and say, well, if the person was trying to produce this sequence of phonemes, and on top of that the last four words they said were A, B, C, and D, then we're going to predict that this combination of phonemes gives us word E. And as it turns out, language models are pretty accurate. If your signal-to-noise ratio is good enough, you can get near-perfect decoding of someone's intended speech.
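Here is a toy sketch of the tail end of that chain: per-bin phoneme probabilities collapsed into a phoneme sequence, which a language model would then turn into words. The five-phoneme inventory, the CTC-style collapsing, and all the numbers are illustrative assumptions, not the actual BrainGate decoder.

```python
import numpy as np

PHONEMES = ["SIL", "HH", "AH", "L", "OW"]   # tiny stand-in inventory
rng = np.random.default_rng(2)

# Fake per-bin RNN output: 12 bins x 5 classes, spelling out "HH AH L OW"
logits = rng.normal(0, 0.1, (12, len(PHONEMES)))
for bin_idx, ph in enumerate([0, 1, 1, 2, 2, 0, 3, 3, 4, 4, 0, 0]):
    logits[bin_idx, ph] += 5.0              # make the intended class win

best = logits.argmax(axis=1)                # greedy choice per 20 ms bin

# Collapse repeats and drop silence, CTC-style
decoded, prev = [], None
for idx in best:
    if idx != prev and PHONEMES[idx] != "SIL":
        decoded.append(PHONEMES[idx])
    prev = idx
print(decoded)   # ['HH', 'AH', 'L', 'OW'] -> a language model, given the
                 # recent word context, would map this to the word "hello"
```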
Raymond Yin
All this is being done literally in real time, as the patient is moving their mouth, or trying to move their mouth and other muscles. And the voice is coming out literally in real time then.
Dan Rubin
That's right.
Raymond Yin
Okay. That's fascinating. Now, large language models, obviously things like ChatGPT and Bard, are connected to a server somewhere and require massive computing power even to do inference. How much computational power do you need to be able to do something like that?
Dan Rubin
That's a great question. Less than you'd think.
Raymond Yin
Okay, that's great.
Dan Rubin
So, the systems that we use now are run on a computer in the room with the participant. It's not off the shelf; it's not something you're going to pick up at Best Buy. It is custom made: it's got a lot of memory, it's got a couple of powerful GPUs built into it, and it draws a lot of current and gets kind of hot, though it's all done safely. This is roughly a $20,000 PC; it's got a couple of GPUs, a high-end motherboard, and a lot of RAM. And this can all be done locally. Now, right now we're talking about a clinical trial that we're running, and it's important to note that the BrainGate clinical trial, the one that I'm part of, is a purely academically sponsored clinical trial. It's run by academic labs; it's a consortium of groups at MGH, Brown University, Emory, UC Davis, Stanford, and the Providence VA. But there are a number of industry startups that are now going the next step and trying to develop devices that they hope to get formal FDA approval for, that they will then be able to…
Raymond Yin
Able to commercialize everything and make this widely available.
Dan Rubin
And once that becomes the case, it may be that it becomes more sensible to try and do some of the heavy lifting in the cloud. We don't do our inference in the cloud; we've thought about it, but, more than anything else for the sake of patient privacy, keeping all of the neural signals in-house just resolves an issue, or prevents us from creating one, when it comes to sending this neural data to the cloud. Even if you send it anonymously, there are still all these questions: any sort of patient data that is generated or acquired in a clinical trial needs to be handled in a particular way, and keeping it local just avoids that problem. Now, 20 years ago, you probably couldn't do the sort of inference that we're doing. The hardware didn't exist, and frankly, the algorithms didn't exist; we didn't have multilayer RNNs that could do this sort of thing. But we've reached a place where both the hardware and the software are good enough to support doing it locally, and that's what we've been doing.
Raymond Yin
Obviously, things get a lot easier and a lot more efficient as technology advances. Thinking about overall quality of life for the patient: when you leave the implant in place within the patient, are they tethered to something? What kind of mobility would they have?
Dan Rubin
That's a great question. It depends on what sort of device and which specific type of technology you're using. But in general, for all of the implantable BCIs, there is something implanted, some sensor either in or near the surface of the brain. And then that, generally, is connected, either through some wires or through some other tunneled connection, to some sort of transmitter that is implanted in the participant.
So, for our BrainGate clinical trial participants, there's a percutaneous pedestal, a little metal pedestal that is connected by a bundle of gold wires to the sensor itself. That little pedestal sits on the outer surface of the skull and has a little cap on it. If you saw one of our clinical trial participants not wearing a hat, you'd say, oh, you've got a little metal pedestal on the outside of your head. And that's there all the time; that's implanted. Now, when they want to use the system, you take the little cap off, and there's a place where you attach a head stage, and there are both wireless and wired head stages, which then sends the signals to the computer system. So when you're using, for example, our system, you need to either be within range of the wireless transmitter or close enough that a cable could connect you to the computer system. But again, this is purely in the context of research. The major startups developing these types of devices have recognized that a percutaneous connecting element is not a viable long-term solution. Having a percutaneous pedestal creates additional risks that need to be mitigated; we mitigate the risk of infection from there being an opening in the skin in our trial in one way, but it's something that you wouldn't want in a marketed medical technology.
So, all of the implantable devices being planned now have a fully implanted transmitting system. Most of them have taken the pathway of having the sensors in the brain and then tunneling some wires to an implanted unit that sits in the chest wall. Right below the left shoulder is a place where it's very common to put things like cardiac pacemakers and vagus nerve stimulators; there's a little unit there. If someone's wearing a shirt, you wouldn't know that there's anything there; even if they weren't wearing a shirt, depending on their build, you may or may not notice it, though if you touch it, you'd feel there's something sort of under the skin there. The different technologies usually use some sort of inductive system for both charging the device and transmitting the information: some sort of unit that sits outside the skin when you want to use it, and that you take off when you're not using it. So in that sense, you connect to the system by just having something over that transmitting unit. And then the rest of the system you can take with you; it depends on how big the decoding computer is, whether or not that's something that you could, for example, have attached to a power wheelchair.
Raymond Yin
Right, it could be kind of like a little backpack, or go on the back of the chair. The goal really is to make the equipment more manageable, so they're not permanently tethered to something.
Dan Rubin
That's right. That's one of the questions that a lot of people ask us about this technology: am I going to be tethered to a computer system? One of the mainstays of assistive technology for people who have paralysis, in particular those who have lost the ability to speak, is eye-tracking systems: computers that look at where your eyes are pointing and basically allow you to use your eyes as a mouse cursor. And they work well enough. Some people have more facility with them than others; sometimes whatever caused the weakness elsewhere in the body also affects how well you can move your eyes, and that can be a challenge for some folks.
They can also be tiring. You have to focus your gaze exactly in the right place at all times; you can't be looking off to the side, you can't be looking around. It can be hard to engage in a conversation; you can't make eye contact with the person you're trying to communicate with. And so for all these reasons, people ask, well, is there another technology that would be a little more liberating, that's a little easier to use? That's the mode of communication that a lot of the BCIs are trying to compare themselves to, asking, does this provide something that's easier to use, or quicker, or less fatiguing than an eye-gaze system?
Raymond Yin
And that's kind of the benchmark today, like you said.
Dan Rubin
Exactly.
Raymond Yin
One of the more famous ALS patients was Stephen Hawking, and we've seen documentaries with him and his chair and him using it. Was he using an eye-gaze system, or was he before some of these trials that you're conducting?
Dan Rubin
Yeah, so my understanding, and I could be corrected: he didn't use a BCI, to my knowledge; he may have used one later in life. I believe that he actually had a small but persistent amount of residual function left in one or two of his fingers and used that as the mainstay of communication. So he used that to control his communication system.
Raymond Yin
Got it. Okay. Like I said, I've seen documentaries on him. I mean, you see him in his chair, but he was able to communicate up until he passed away.
Dan Rubin
So many, many patients who are diagnosed with ALS lose the ability to move their hands completely. Some patients, for reasons we don't understand from a neurologic standpoint, retain a small amount of function that doesn't go away. And so, from an assistive communication standpoint, outside of the realm of BCI, anytime someone has volitional motor control, it can be harnessed to control a switch, basically, and you can then build on that to develop more sophisticated communication. If it's a single movement, then you've got a binary switch, and there are different systems devised with letters that scroll by automatically and you indicate when you want to select a letter, or menus that nest into other menus. There are different clever ways of trying to make the most efficient and usable system for someone, and to meet them where they are based on what sort of motor function they've got.
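As a sketch of that single-switch scanning idea, here is a minimal loop; the dwell time and the `my_switch` polling function are hypothetical placeholders, and a real system would add word prediction, nested menus, and per-user timing.

```python
import itertools
import time

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def scan_and_select(switch_pressed, dwell_s=1.0):
    """Cycle letters until the user's switch fires; return the letter."""
    for letter in itertools.cycle(ALPHABET):
        print(f"\r> {letter}", end="", flush=True)
        time.sleep(dwell_s)          # how long each letter stays selectable
        if switch_pressed():         # any harnessable movement becomes input
            return letter

# Usage: message = "".join(scan_and_select(my_switch) for _ in range(20)),
# where my_switch() polls whatever residual movement the person has.
```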
Raymond Yin
Well, that's it for today. We've only scratched the surface in exploring brain-computer interfaces. Tune into our next episode as Dr. Rubin and I discuss the different communication and mobility methods used in brain-computer interface research today. In the meantime, explore more content from Mouser's Empowering Innovation Together series on BCIs by visiting mouser.com/empowering-innovation.