Capitol’s Brain Machine Interface Team Joins B2CI Competition



Capitol students participated in the inaugural Brain-to-Computer Interface (B2CI) competition for the Northeast region on March 2, 2019. The competition was held at the National Electronics Museum and was co-sponsored by the IEEE Baltimore Section and the IEEE Brain Initiative. Our team was composed of graduate student Mark Horvath, senior Joshua Joseph, and sophomore Gregory Patton. The competition offered three categories to compete in: brain games, brain drone, and an open showcase.

We sat down with the team’s newest member, Gregory Patton, to talk about the competition, how one even begins to move a machine with just their thoughts, and what’s next for this exciting research project.

B2CI Team
From left to right, Gregory Patton, Joshua Joseph, Dr. Garima Bajwa, and Mark Horvath

SVH: So Greg, what is your involvement with Capitol’s Brain to Machine Interface team?

GP: Well, before I get to that, I want to say that we’re fortunate to be building on a project that a senior who graduated last year developed with our BMI tech. The main concept for the entire project was developed by Indya Dodson as her senior project. Josh is a senior now; he’s been working on the project since she left, along with graduate student Mark Horvath. I only came onto the project this past January, so they’ve been showing me the ropes and getting me familiar with how it works: the code involved, the process of hooking the user up to the EEG headset and translating signals into commands, that sort of thing. I was there almost like an intern since they were showing me the basics, but it has been really fun to learn.

SVH: I’m really curious about some of the basic concepts for how you drive a drone with your brain. Do you give it specific commands to get it to do stuff or do you think in a specific way?

GP: Sticking to the basics, it’s like turning a lamp on and off. We have an on and an off setting, and we’re working in computer terms, or binary – that’s either a 0 or a 1. The proof of concept for our project was translating brainwave data into commands that get the drone on and flying.

Simply put, we use a headset to monitor brainwaves and collect data. We’re using OpenBCI hardware, but it’s a very large headset that connects to about 20 points on the head. It’s also very uncomfortable to wear for long periods of time, which affects the signals the wearer produces. To fix that, we fabricated a headband with four electrode receptors.

Josh puts the headset on and we bring up a visual stimulus. So how do we get the signals to the computer, right? That’s the big question. The best and easiest way to do that is to produce a brainwave that’s very plentiful in the human brain.

SVH: Okay, like a strong emotion or feeling?

GP: In this case, a visual stimulus. Josh sits down in front of a computer showing a white screen with a black dot. He focuses on the black dot and then closes his eyes. That naturally puts him into a meditative state, which creates a huge spike in alpha brainwave patterns coming from the occipital lobe at the back of the brain. It’s that alpha spike that gives us enough signal to work with; once it’s fed into the system, we can turn it into a command.
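For readers wondering what “watching for an alpha spike” looks like in software, here is a minimal sketch (not the team’s actual code): band-pass one EEG channel to the 8-12 Hz alpha band and check whether its power crosses a hand-tuned threshold. The sampling rate, threshold, and single-channel framing are assumptions for illustration.

```python
# Minimal sketch: detect an eyes-closed alpha spike in one EEG channel by
# band-pass filtering to 8-12 Hz and thresholding the mean power.
# The sampling rate and threshold below are assumptions, not the team's values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                # assumed sampling rate in Hz
ALPHA_BAND = (8, 12)    # alpha rhythm band, in Hz

def alpha_power(samples: np.ndarray) -> float:
    """Band-pass the signal to the alpha band and return its mean power."""
    b, a = butter(4, [f / (FS / 2) for f in ALPHA_BAND], btype="band")
    filtered = filtfilt(b, a, samples)
    return float(np.mean(filtered ** 2))

def eyes_closed_detected(window: np.ndarray, threshold: float = 25.0) -> bool:
    """True if alpha power in this window exceeds a hand-tuned threshold."""
    return alpha_power(window) > threshold
```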

SVH: How does the data travel from the brainwaves to the drone?

GP: There are about five levels between the signal and the drone. There’s the headset and the dongle, which is like a microcontroller. Then it goes to the computer, and the program writes those signals into a CSV file, basically a spreadsheet-style data file. We have a program written in Python, and it tells the drone: hey, we started at 0, but now we’ve got this spike in alpha brainwaves. Take that data, and we’re going to use it to lift the drone and hover at a certain altitude.
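As a rough illustration of the last two steps in that chain, the sketch below reads alpha power from a CSV log and, on a spike, sends the AR.Drone 2.0 a takeoff command over UDP. This is not the team’s Python program; the file name, column name, threshold, and the AT*REF values (as given in Parrot’s AR.Drone developer documentation) should all be treated as assumptions.

```python
# Sketch of the "CSV in, drone command out" step: read a logged alpha-power
# column, and on a spike send an AT*REF takeoff command to the drone over UDP.
# File name, column name, and threshold are illustrative assumptions.
import csv
import socket

DRONE_IP, AT_PORT = "192.168.1.1", 5556   # AR.Drone 2.0 default address and AT-command port
TAKEOFF = 290718208                       # AT*REF bitfield for takeoff (per developer docs)
LAND = 290717696                          # AT*REF bitfield for land

def send_ref(sock: socket.socket, seq: int, value: int) -> None:
    """Send one AT*REF command (takeoff or land) as a UDP datagram."""
    sock.sendto(f"AT*REF={seq},{value}\r".encode(), (DRONE_IP, AT_PORT))

def main(csv_path: str = "eeg_log.csv", threshold: float = 25.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["alpha_power"]) > threshold:   # the alpha spike
                send_ref(sock, seq=1, value=TAKEOFF)
                break

if __name__ == "__main__":
    main()
```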

SVH: And so you have different pictures for directions that you want it to follow?

GP: That is quite possibly the next step. We haven’t experimented with that quite yet, but while we were at the competition, we were getting all sorts of suggestions for how we could do this or that. The suggestions were really interesting, because we were the only ones using a visual stimulus to induce flight.

SVH: At the B2CI competition what were some of the other teams doing?

B2CI Organizers
2019 committee co-chairs Garima Bajwa and J. Max McQuighan (AACC). Not pictured: co-chair Sherwood Olson (IEEE Baltimore)

GP: So Anne Arundel Community College used a different headset made by Emotiv, which is another company. They’re not open source like OpenBCI is; however, the technology offers a much more streamlined process. Ours was kind of from scratch, with no predetermined code or anything, but theirs was a headset with a few more connections. What they did was have the user work with the headset beforehand and train the software to recognize specific motions from various parts of the brain. Say it’s like a PlayStation controller and you’re playing a video game: you spend an amount of time training the up direction, the down direction, sideways, or what have you. While that is, again, a much more streamlined process, it isn’t done on-site. You have to train the commands ahead of time.

It is quite fascinating to watch, though, because their student put his headset on and was able to lift the drone, move it forward, and send it back without anybody ever using a remote control.

I think Professor Bajwa wants to experiment with that headset too. Next year we’re going to be doing a wider range of things, but the other school I remember getting a chance to watch was UMBC, which had a similar setup to AACC’s.
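In code, the ahead-of-time training that AACC (and, it sounds like, UMBC) relied on boils down to supervised classification: record labeled EEG windows while the wearer practices each command, extract features, and fit a classifier. The sketch below uses scikit-learn and a crude per-channel power feature; it is an illustration of the idea, not Emotiv’s SDK or either school’s code.

```python
# Sketch of "train a command ahead of time": fit a classifier on simple
# per-channel power features from labeled EEG windows, then predict the
# command for new windows. Features and labels are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_powers(window: np.ndarray) -> np.ndarray:
    """Crude power features from an (n_channels, n_samples) EEG window."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    return spectrum.mean(axis=1)          # one coarse power value per channel

def train(windows: list[np.ndarray], labels: list[str]) -> LinearDiscriminantAnalysis:
    """windows: EEG segments recorded while practicing each command;
    labels: the command name for each segment ("up", "down", ...)."""
    features = np.vstack([band_powers(w) for w in windows])
    return LinearDiscriminantAnalysis().fit(features, labels)

def predict_command(model: LinearDiscriminantAnalysis, window: np.ndarray) -> str:
    """Classify a new window as one of the trained commands."""
    return model.predict(band_powers(window).reshape(1, -1))[0]
```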

SVH: Yeah the visual approach is really interesting. I wouldn’t have thought of that but it makes a lot of sense.

GP: And with our headband, four electrodes are connected, but right now we’re only using two of them, because that’s where Indya left off with the project. The other two electrodes haven’t been programmed for any commands yet, but they could be mapped to other movements.

You asked a very interesting question, though, about how we could get different commands in there, because right now we just have up, down, hover, and then land.

SVH: Which is still impressive! Especially given that no one’s touching the machine at all.

GP: Yes, it is! But we do want to get to the next level, and to do more we would need more sensory data to feed to the program to create new commands. So I asked Dr. Bajwa: what if we used auditory stimuli? We’re working in frequencies, which are measured in hertz, and you might be familiar with binaural beats, where you can listen to some of your favorite songs at a different frequency. That’s what I was thinking: using that as a conduit for new directional commands.

Some of the judges actually told us that since we’re using visual stimuli already, why not have other visual stimuli on the screen flashing at those frequencies?
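That suggestion from the judges is essentially the SSVEP (steady-state visually evoked potential) approach: each on-screen target flickers at its own frequency, the occipital EEG picks up a matching peak, and the strongest peak tells you which target the wearer is attending to. Below is a toy sketch of that frequency-picking step; the flicker frequencies, sampling rate, and command mapping are assumptions.

```python
# Toy sketch of SSVEP-style frequency tagging: the command is whichever
# flicker frequency dominates the occipital channel's spectrum.
# Frequencies, sampling rate, and the command mapping are assumptions.
import numpy as np

FS = 250                                                  # assumed sampling rate (Hz)
TARGETS = {7.5: "forward", 10.0: "back", 12.0: "left", 15.0: "right"}

def pick_command(occipital: np.ndarray) -> str:
    """Return the command whose flicker frequency has the most spectral power."""
    freqs = np.fft.rfftfreq(len(occipital), d=1 / FS)
    power = np.abs(np.fft.rfft(occipital)) ** 2
    # Score each target by the power within +/- 0.5 Hz of its flicker frequency.
    scores = {f: power[(freqs > f - 0.5) & (freqs < f + 0.5)].sum() for f in TARGETS}
    return TARGETS[max(scores, key=scores.get)]
```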

SVH: I’m thinking of music; you really could evoke a certain way of thinking depending on what you play for somebody.

GP: You had another good point there, which I also asked Dr. Bajwa about: what if we compounded this with other sensory inputs? We’re using EEG signals, but what if we connected something like a heart monitor, or found a way to translate someone’s emotions into a command? I feel like there are a lot of different directions we could take it.

I want to continue with the visual stimulus approach because the judges were impressed with that direction.

By the way, while we were at the competition, we had a fully charged battery all ready to go. [laughs]

SVH: I sense that something may have gone awry?

GP: Yes, you know... Murphy’s Law was in effect. Our batteries were fine, but we came to find out that our charger was faulty. Our fully charged batteries discharged completely into that faulty charger, and when we set up our drone, it was inoperable! We had no power!

We were scrambling to figure out how we could prove our concept with no power. That’s actually another reason we’re working to shift the technology we’re using. The drone we use is a Parrot AR Drone 2.0; it was a donation to the university, but they’re no longer being produced, so we can’t just run off to the store and buy a new battery. No one else had any batteries, so our only option was to walk the judges through the complexity of how the EEG signals are processed and transformed into commands. We were able to show them that without the drone, and they were still impressed. They said, you know, this is a really fascinating approach, if only you had your battery so we could actually see it.

This made me realize: this IS really interesting. We’ve got AACC with their streamlined process, but our fabricated, nuts-and-bolts version was very creative and unique. I would like to continue working on it.

SVH: Is this something that you guys plan to work on every semester?

GP: Yes. I think there’s going to be a drone race every spring, so we’ll be working towards next spring’s race and annual B2CI competition.

Student and professor in lab

SVH: It’s interesting because you almost need a little bit of psychology background in order to program the technology. You have to know a bit about the different sections of the brain and even about how the human body works, not just how the technology is working.

GP: Yeah, and what I tell people when I’m talking about the project is that the drone races are really more of a platform for student interest, but the applications, biological, psychological, and otherwise, are endless. This technology is already being worked on at Hopkins, where they’re applying it to prosthetics.

When Indya wrote her paper, she mentioned that the implications of this technology are hugely relevant to the medical field. Veterans who have had amputations could have a prosthetic that’s tuned into their brain pathways and can be moved. There’s home automation for the elderly, and everyday automations it could be applied to.

I’m also imagining being able to play some of my video games without any controller. Instantly being able to communicate a command to a software like that is pretty cool.

SVH: We’re moving closer and closer to the Star Trek holodeck.

GP: Yes! Fingers crossed, I hope so.

SVH: So what’s next for the BMI team besides prepping for the next drone race?

GP: So my project for the end of the semester is to use a different headset, just for proof of concept, to control two smaller ground-based hexapods; they’re like six-legged robots. The idea is that the more we can control at the same time, the better: if we can get five at once, and we’ve got five fingers, we’re going to try to transfer that application to a robotic hand. We want to join that side of the research as well.
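One simple way to picture that “one decoded signal, several actuators” idea is a dispatcher that fans a command out to every registered device, whether that is two hexapods today or five fingers of a robotic hand later. The device classes below are hypothetical placeholders, not real robot APIs.

```python
# Sketch of fanning one decoded EEG command out to several devices.
# The Device interface and PrintingHexapod are hypothetical stand-ins.
from typing import Protocol

class Device(Protocol):
    def execute(self, command: str) -> None: ...

class PrintingHexapod:
    """Stand-in for a real hexapod driver; it just logs the command."""
    def __init__(self, name: str) -> None:
        self.name = name
    def execute(self, command: str) -> None:
        print(f"{self.name}: {command}")

def dispatch(command: str, devices: list[Device]) -> None:
    """Send one decoded command to every registered device."""
    for device in devices:
        device.execute(command)

dispatch("forward", [PrintingHexapod("hexapod-1"), PrintingHexapod("hexapod-2")])
```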

SVH: That is really cool.

GP: And the last really cool thing is that Dr. Bajwa, Dr. Pittman, Mark, Josh, and I are writing a research paper on the subject in its entirety. One of the key elements we’ll be focusing on is how to secure someone’s EEG signals. Biometrically, the way you have a retinal scan or a fingerprint lock, what are the applications of someone’s brainwave patterns when it comes to their identity? Can they be isolated per person and secured? Or could someone do something malicious to acquire another person’s signals? What are the implications of that? That’s another area that’s going to be really interesting.
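To make the biometric question concrete, one very simplified framing is to enroll a per-person template from EEG feature vectors and later verify identity by similarity to that template. The toy sketch below does exactly that with cosine similarity; the features, threshold, and overall framing are assumptions, not the team’s research.

```python
# Toy sketch of EEG-as-biometric: enroll a per-person template from several
# sessions' feature vectors, then verify a new sample by cosine similarity.
# The feature choice and threshold are purely illustrative assumptions.
import numpy as np

def enroll(feature_vectors: list[np.ndarray]) -> np.ndarray:
    """Average several sessions' feature vectors into one identity template."""
    return np.mean(np.vstack(feature_vectors), axis=0)

def verify(template: np.ndarray, sample: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept if cosine similarity to the enrolled template exceeds the threshold."""
    similarity = float(np.dot(template, sample) /
                       (np.linalg.norm(template) * np.linalg.norm(sample)))
    return similarity >= threshold
```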

We had some high school students in a little while ago, and we were telling them about the project, and they asked, how are you going to do that? And we said, we don’t know yet! But that is the fun.