Bridging minds and machines Episode 3
Raymond Yin
Will brain-computer interfaces be capable of augmenting human abilities? Do we want them to? What are the ethical implications surrounding brain-computer implants? We wrap up our conversation with Dr. Rubin in this exclusive podcast episode for Mouser subscribers.
Where do you see BCI from a research standpoint or a commercialization standpoint, say 10, 20 years from now? I mean, have we eradicated communication issues for paralysis patients and things like that?
Dan Rubin
Boy, what a great question! So no, we haven't eradicated communication issues - that's sort of just the state of things today. Where are we going to be in 10 and 20 years? There's a couple of things that are going to happen along the way. First, everything that's happened in the BCI space for the past 20 years has almost entirely been driven by academic research labs. And it's really only in the past couple of years that startups have said, man, this looks promising. These academic groups have - to borrow industry terminology - de-risked the prospect of trying to make this into an approved medical device. And so, there are a couple of startups that are either just starting or about to start clinical trials of devices that will ultimately, I hope, get FDA approval.
For medical devices … the clinical trials, people are often familiar with phase one, phase two, phase three trials, which generally refer to drugs and other small molecules. For medical devices, it's often feasibility studies and pilot studies as opposed to pivotal studies. There's one company that's going to be starting a pivotal study soon. There's a couple of other companies that have started early feasibility and pilot studies, so I think there's going to be FDA approval for this technology - specifically indicated for people with severe paralysis - probably within five years.
I think within five years someone who's got severe paralysis from a spinal cord injury or brainstem stroke or ALS will probably be able to go to their neurologist or neurosurgeon or physiatrist and someone will say, you would benefit from a BCI to help with communication. I'm going to refer you to this specialist. And they're going to have this placed in much the same way as someone who needs a pacemaker goes and sees a cardiologist and they say, you've got a bad rhythm, you need to get a pacemaker. And then you go, and you get it. That's in the short term. And that's the medical device route for implanted technology that is going to, I think, stay within the medical space for the foreseeable future because of the costs. I don't think this is going to be something that's going to become accessible to people without insurance. And also because of the risks associated with having a device placed.
At the same time, there are the consumer-grade BCIs - the wearables. These are both EEG-based devices and wearables that pick up on the electrical activity of muscle movement - things that use EMG, electromyography, sensors. They get sort of lumped in with BCIs, and I guess they are, depending on how strict you want to be with the definition. These wearable EMG- and EEG-based devices are consumer technology, so they're being priced at a place where people will be able to buy them - you're not going to use your health insurance to get these. And those are already on the market.
Raymond Yin
Yep, I've seen some.
Dan Rubin
And there's some that - there's headphones that adjust to your mood to try and promote wellness. There are systems that are integrated with VR to try and use brain signals to enhance the VR experience. And these I think are going to keep getting better, because computers keep getting better and our ability to decode things keeps getting better. Although I don't think these are going to be a viable means of communication for people with advanced paralysis in the short term, I think they may get there in the longer term. And in particular, because these are going to be consumer grade and more widely adopted, there's going to be a big population of people using them, and hopefully that drives the sort of grassroots development that makes so much of technology fun to work with. So that's the short term.
Longer term, you asked about the 10-to-20-year range. Here, I think, for the implantable devices, once the first round of devices becomes available and approved for use, there are going to be research groups looking at other indications. One of the first ones will probably be things like cortical or hemispheric strokes - these are people who have severe paralysis from neurologic injury, so a natural population to consider working with. The other group - or the other set of conditions - that I think a lot of the startups in particular are interested in investigating is various mental health conditions.
Raymond Yin
Oh, interesting.
Dan Rubin
And so, this gets into a different space. This gets into something sometimes called neuromodulation, where the idea is that not only would you be recording the electrical activity of the brain, but you'd be trying to do something to change the signal, to try and make it look different.
Raymond Yin
So, you would actually have an input into the brain as opposed to just a passive recording.
Dan Rubin
That's exactly right. And so, there's one currently approved device on the market that has that model, although it actually has more in common with cardiac defibrillators than it does with other BCIs. And that's the NeuroPace, an epilepsy treatment device. So, this is an FDA approved device. This is something that people with severe epilepsy could get referred to have placed by a neurosurgeon, but it has two components. So, it has a sensor that's measuring the neural activity, and then it has a separate set of electrodes that are stimulating electrodes.
Raymond Yin
Ok
Dan Rubin
And the way that it works is that it monitors the brain's activity. And if a pattern is detected that looks like the start of a seizure, or looks like the activity that immediately precedes a seizure, it gets triggered to automatically deliver a set of impulses to try and terminate that seizure before it starts.
Raymond Yin
Okay. So almost like a 180-degree out-of-phase wave coming in to just damp everything down.
Dan Rubin
Exactly right. You could call it sort of a bi-directional BCI, in a sense - something that's both recording and stimulating at the same time. Now, this is to prevent a seizure. So it's monitoring the electrical activity, but it's not like the other BCI technology we've been discussing, in that you're not actually trying to extract information from that electrical activity. You're detecting an abnormal pattern, but you're not trying to figure out what someone is trying to do. You're not trying to understand the semantic content of that brain activity. With the notion of a bi-directional BCI for mental health, for example - there, you'd be trying to read out some neural correlate of whatever the person's specific mood disorder may be. It may be someone who has refractory depression; it could be someone with obsessive-compulsive disorder. There, you're going to be trying to pull out some information from the signal and then figure out, in a much more nuanced way, how you would alter it. I mean, the sort of stimulation that you deliver to stop a seizure from happening is certainly going to look very different than the kind of modulation you'd provide to potentially treat someone's depression.
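The detect-and-stimulate loop Dan describes can be sketched in a few lines of Python. Everything here is invented for illustration - the signal, the "line length" feature, and the threshold are toy stand-ins, not how any approved device actually works:

```python
# Toy closed-loop "detect and stimulate" controller, in the spirit of
# responsive neurostimulation: monitor activity, and trigger a pulse
# when a seizure-like pattern appears. All numbers are hypothetical.

def line_length(window):
    """Sum of absolute sample-to-sample differences - a cheap feature
    that rises sharply during fast, high-amplitude activity."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def closed_loop(samples, window_size=8, threshold=4.0):
    """Slide a window over the signal; return the sample indices where
    a stimulation pulse would be triggered."""
    triggers = []
    for i in range(len(samples) - window_size + 1):
        if line_length(samples[i:i + window_size]) > threshold:
            triggers.append(i)
    return triggers

# Quiet baseline followed by a burst of fast, high-amplitude activity.
signal = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.1,
          1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print(closed_loop(signal))  # triggers only once the burst enters the window
```

The point of the sketch is the architecture - a sensing path feeding a detector that gates a stimulation path - rather than the detector itself, which in real devices is validated and tuned per patient.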
Raymond Yin
Right. Okay. Interesting. I mean, yeah, like I said, I know we've talked on this podcast about using digital therapies for mental health. And so now this would almost be like a surgical technique to do the same thing.
Dan Rubin
Pretty much. And there is precedent for this. There are deep brain stimulators - again, another approved implanted neurotechnology - and deep brain stimulators are currently used mostly in Parkinson's, to help people who have really severe tremor. But they're also used, with increasing frequency, for treating people who have obsessive-compulsive disorder. If you have a stimulating electrode in the right part of the brain, you can actually relieve the symptoms of OCD for some people.
Raymond Yin
Oh, interesting.
Dan Rubin
To qualify this, it needs to be very severe, debilitating OCD, because you're having a device placed - but it's approved, and for some people it seems to help. Even in these cases, though, the stimulation that's delivered is very non-specific. With current deep brain stimulators, you program in a frequency, an amplitude, and a pulse width, and it just goes.
Raymond Yin
Right.
Dan Rubin
It's not listening to the brain's activity to decide when to deliver a signal and what sort of signal to deliver. But the hope is that with more sophisticated sensors, we would be able to actually understand something about the brain activity that indicates when the circuit controlling a particular mood disorder is working the way it's supposed to or not, so that the signal we deliver modulates it in the right direction.
Raymond Yin
Interesting! Dan, so far we've talked about giving a patient back an ability that they once had - improving their quality of life, whether it be communication or mobility. But as you know, science fiction is full of stories with brain implants - yeah, I can see you smiling and shaking your head - where you're giving somebody an ability that they didn't have, or enhancing an ability, whether it be like, “Hey, I'm going to attach another terabyte of storage to somebody's brain,” or something like that. Is that even possible? If it is possible, how would that look?
Dan Rubin
That's a great question - yeah, we get that a lot.
Raymond Yin
I'll bet
Dan Rubin
The picture that I always conjure is back from The Matrix when Keanu Reeves doesn't know kung fu and then Morpheus does something and then he says, now I know kung fu.
Raymond Yin
Right.
Dan Rubin
And I think that's the model a lot of people think of. Honestly, it's a little hard to imagine exactly how, or if, that could ever work. This notion of BCIs for enhancement is challenging because we don't really … this talk of bi-directional BCI…
Raymond Yin
Right
Dan Rubin
So, inherent in that would be this notion that you'd be able to stimulate the brain in such a way that certain patterns of activity are present that weren't there before, and that those patterns are informative of something. At the moment, this has only really been done in an intelligent way in one very simple example. There's a research group at the University of Pittsburgh that's one of the best BCI research groups in the world, and they were the first group to place sensors not only in motor cortex but also in sensory cortex.
And so, they had a clinical trial participant who was using and controlling a robotic arm. There's a famous video of this clinical trial participant giving President Obama a fist bump, actually. The way it worked is that the sensors in motor cortex were giving him the ability to control a robotic arm, which is something a number of groups have done, and it's a major research goal for many. But the sensor in sensory cortex was a stimulating electrode. So anytime the robotic arm gripped something or touched something, that would trigger an impulse to be sent into the sensory cortex. And the participant reported that they could basically feel when the robotic arm was making contact with things. They described it not as a normal or natural sensation, but they could feel it.
And when it was paired with the visual of actually seeing the arm touch something, it felt - although not natural - intuitive. It felt like something they could work with. And what the researchers who ran this clinical trial showed was that when they had sensation turned on, the person's performance - their ability to use the robotic arm - was markedly enhanced. So if you timed how quickly they could complete a put-some-pegs-into-a-pegboard kind of task - how quickly can you get nine pegs into this pegboard - without sensation it was a certain speed, and when you turned on the sensory feedback, they could go a lot faster.
And of course, you can intuit that if you were asked to do something with your hands and your hand was completely numb, it would just be harder. So that's a simple example, and probably the most comprehensive example of an actual bi-directional BCI. To extrapolate from that to injecting knowledge to…
Raymond Yin
Memories or something.
Dan Rubin
Memories or skills that weren't there. Before we could do that, we would first have to understand how the brain even encodes them in an endogenous sense - and we don't know. You remember your third-grade teacher's name, and I remember my third-grade teacher's name, but we have no idea where in our brains that is stored, or how. The prevailing theory is that it's sort of stored everywhere, through some high-dimensional pattern of activity - that you'd have to know the simultaneous activity of all 88 billion neurons in your brain to convincingly identify it.
Raymond Yin
Right. It sounds like a job for quantum computing.
Dan Rubin
Potentially.
Raymond Yin
Yeah. Never know. So interesting. So, pretty much out of the question then, at least for the foreseeable future.
I recently read that in Colorado and in California, they specifically added privacy protections for brain scan data. And you alluded to that earlier with your clinical patients, where you didn't want to transmit their brain data over to the cloud for processing and things like that. Short of someone literally being forcibly implanted, what sort of ethical issues are there with BCIs?
Dan Rubin
Yeah, this is something we grapple with a lot in our group, and we discuss openly with the other BCI groups in the community. What it really comes down to is: we don't know what we don't know - I guess that's the best way to put it, right? People say, can you share the neural data? It's really of great value. Neuroscientists all over the world would love the opportunity to look at neural data from human research participants. This is a pretty rare research setup that we've got, and a pretty unique one. For all the reasons I described earlier, it would be of great value to people doing basic neuroscience to have the sort of tasks that we're running and the resolution of the data that we've got. And they say, well, why can't you just share the data? You can fully anonymize it, and we'd just be looking at voltages on a screen. There should be no problem with that - there's nothing identifiable about voltages on a screen.
The challenge is, we don't know what is or is not actually encoded in that data yet. As every year goes by and we do research with it, we're discovering that we can decode more from that data than we could the year prior. And so if we were to take all of the neural data and just make it publicly available, then 10 years from now, maybe someone's developed an algorithm and they can go back and actually figure out where that data's from, or who it's from, just by looking at patterns of activity - and say, oh, this person must've been thinking about these things, or maybe they were taking these medications.
All of these different things. We just don't know the full extent of the information that's encoded within the brain signals. So I think it's important to have early protections in place to keep that data safe, and to recognize that it is potentially as uniquely identifiable as any other data that's gathered in the course of a clinical trial. Because again, we just don't know.
Raymond Yin
Yeah. Like you said, we just don't know what we don't know. And once again, I remember, wow, years ago there was a movie where somebody did a brain scan of themselves while they were dreaming, and what they were dreaming was actually presented as evidence in a trial. Will that ever be possible? Dream interpretation is kind of this weird pseudoscience, but if we're able to record data during dreaming - like your patient earlier with the Simon - will that be able to be played back somewhere as video, as VR, or anything like that?
Dan Rubin
It would be difficult to do that without someone's cooperation, I guess would be the easiest way to put it. But we can already… so there are some great research groups - this is outside the world of BCI because it's sensory decoding, but within neuroscience - that have shown that if you measure brain activity as someone is viewing an image, you can recreate the images that they're seeing. You can record from visual cortex and then basically reconstruct whatever visual scene it was they were looking at. So the notion of doing that while someone's dreaming and playing back a visual representation of their dream - I wouldn't call that science fiction at this point. I think it's probably feasible. Now, the question of course is how do you tune your model? That's where you need the person's cooperation. You're able to do this because you record from that person's visual cortex as they watch lots and lots and lots of different visual scenes, and then you've trained a model - because you need to know the map between their brain activity and what they're seeing.
Raymond Yin
Right, their interpretation.
Dan Rubin
Exactly. So, if you've trained a model on someone's own neural data long enough, then you can show them a novel visual scene and reconstruct what they were looking at, because you know that model. I don't think there's a short-term route to being able to scan the brain of someone you've never seen before and reconstruct what they were looking at. That would require a more generalist model of neural decoding. And this is one of the areas of active investigation - to some extent a sort of holy grail - this notion of transfer learning. If you trained a BCI decoder on enough neural data from lots of other people, and then had someone who had never used a BCI and recorded from them for the first time, could you decode something meaningful? So far, the answer has been no. You do seem to be able to reduce the time it takes to train a decoder - having a pre-trained model gets your model into the right space to speed up tuning; it finds the right range of hyperparameters to start the fine-tuning from. But we haven't really gotten to a place where you could have a general-purpose BCI decoder for all people.
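The "pre-training speeds up per-user tuning" point can be illustrated with a deliberately tiny example: fitting a one-parameter linear decoder by gradient descent, starting either from scratch or from a weight that is close to what other users would have produced. The numbers are entirely synthetic; real decoders are vastly higher-dimensional:

```python
# Toy illustration of warm-starting a decoder. We fit y = w * x by
# gradient descent and count the steps needed to converge, comparing a
# cold start (w = 0) against a hypothetical "pre-trained" weight.

def steps_to_fit(w, xs, ys, lr=0.05, tol=1e-3, max_steps=10_000):
    """Run gradient descent on mean squared error; return the number of
    steps taken before the error drops below tol."""
    for step in range(max_steps):
        err = [w * x - y for x, y in zip(xs, ys)]
        mse = sum(e * e for e in err) / len(err)
        if mse < tol:
            return step
        grad = 2 * sum(e * x for e, x in zip(err, xs)) / len(xs)
        w -= lr * grad
    return max_steps

xs = [1.0, 2.0, 3.0]
ys = [2.1, 4.2, 6.3]               # this user's "true" decoder weight is 2.1

cold = steps_to_fit(0.0, xs, ys)   # train from scratch
warm = steps_to_fit(1.9, xs, ys)   # warm-start near other users' weights
print(cold, warm)                  # warm start converges in fewer steps
```

This mirrors the behavior Dan describes: the pre-trained starting point does not decode the new user by itself, but it shortens the per-user calibration.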
Raymond Yin
Right. Yeah. And like you mentioned, the data coming off somebody's brain is probably uniquely theirs - but given a big enough sample size?
Dan Rubin
Maybe, and this is where people are trying to learn from the large language models,
Raymond Yin
Right? Yep, exactly.
Dan Rubin
So, I like to think that the way I speak is unique to me, but as it turns out, if you train a model on everyone who's ever written anything on the internet and then have it try to predict what I say, it tends to get things right. At least if I give it the first half of my sentence, Gmail seems to know what I'm going to say, more often than not.
Raymond Yin
And I've been thinking, okay, that's kind of cool - but also, maybe it's not a good thing for it to know exactly what I'm thinking on the back half.
Dan Rubin
There's that, and there's also this notion - and this comes up within the space of BCIs. Everything I've talked about, large language models and multilayer decoders - these are all types of AI, right?
Raymond Yin
Right. Yep, yep.
Dan Rubin
It’s semantics, really, but we haven't really dug into generative AI, right? The ChatGPT-type tools, where you just give a little seed and then it generates some concept from whole cloth. So far, I've been talking about decoding.
Earlier on, I talked about decoding speech by trying to fit the sounds someone's trying to make to a map of particular phonemes, and then mapping the phonemes to larger words based on the context in which they're appearing. The alternative approach would be to decode something like a generic greeting. Instead, you have a BCI that just decides, alright, they're trying to say hi, and then you have a large language model produce a “hi, how are you today?” Right? They didn't actually try to sound out each one of those words. Your BCI decoded "give a pleasant generic greeting," and then you have ChatGPT actually produce the language. The challenge there is that you really converge towards the mean, right? That's what the average generic greeting would be. You risk really losing one's individuality at that point.
And so, it may be the case that that sort of approach to decoding neural data may not work, because it just regresses towards the mean, and that's just not the way our brains work.
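The first approach Dan describes - phonemes to words, with context resolving ambiguity - can be sketched with a toy lexicon and bigram table. The phoneme labels, word lists, and counts below are all invented for illustration; real speech decoders use large acoustic and language models, not a hand-built dictionary:

```python
# Toy phoneme-to-word decoder: each decoded phoneme sequence maps to one
# or more candidate words (homophones), and a tiny bigram count table
# stands in for the language-model context that picks among them.

LEXICON = {
    ("HH", "AY"): ["hi", "high"],   # same sounds, different words
    ("HH", "AW"): ["how"],
    ("Y", "UW"):  ["you", "ewe"],
}

# How often word B followed word A in some hypothetical corpus.
BIGRAMS = {("hi", "how"): 50, ("high", "how"): 1,
           ("how", "you"): 40, ("how", "ewe"): 0}

def decode(phoneme_words):
    """Greedy left-to-right decode: score each candidate word by its
    bigram count against the previously chosen word."""
    out = []
    for phonemes in phoneme_words:
        candidates = LEXICON[tuple(phonemes)]
        prev = out[-1] if out else None
        best = max(candidates, key=lambda w: BIGRAMS.get((prev, w), 0))
        out.append(best)
    return out

print(decode([["HH", "AY"], ["HH", "AW"], ["Y", "UW"]]))
# → ['hi', 'how', 'you']
```

Note how this differs from the generative alternative: here the user supplies every word through the phonemes they attempt, and the language model only disambiguates - it never invents content on the user's behalf.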
Raymond Yin
Interesting! Like I said, this has been fascinating for me. I really appreciate all your insight. As somebody who's interested in technology, I've read about these things, and to hear from somebody with your level of knowledge has just been really eye-opening for me.
Well, Dan, I want to thank you so much for joining us on The Tech Between Us.
Dan Rubin
My pleasure. Thanks for having me. This has been a blast.
Raymond Yin
Thank you for listening to our subscriber exclusive episode. We hope you'll explore more of Mouser’s Empowering Innovation Together content on brain computer interfaces. Stay tuned as Mouser explores more technologies throughout the year by visiting mouser.com/empowering-innovation.