Press "Enter" to skip to content

Brain implant may enable communication from your thoughts

A team of Duke neuroscientists, neurosurgeons, and engineers has developed a speech prosthetic that can translate a person’s brain signals into the words they want to say.

The new technology, introduced in the journal Nature Communications on November 6, may one day help people who are unable to talk regain the ability to communicate through a brain-computer interface.

Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine, noted: “There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak, but the current tools available to allow them to communicate are generally very slow and cumbersome.”

Until now, the best speech decoding devices could decipher about 78 words per minute, while people speak roughly 150 words per minute on average. Even with the best technology, that lag would drag down conversation and be of limited use to patients with speech or motor disorders. The delay was largely due to the small number of brain activity sensors that could be fitted onto the thin piece of material that lies on the surface of the brain; fewer sensors tracking brain activity meant less neural data to decode.

The team from Duke University aimed to get past these limitations by making “high-density, ultra-thin, and flexible brain sensors.” Cogan worked with Jonathan Viventi, Ph.D., of the Duke Institute for Brain Sciences to fit an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of plastic. While 256 sensors may seem like a lot, the reality is that even more are needed: neurons sitting right next to each other show differing activity patterns when coordinating speech, so the sensors must be dense enough to distinguish the signals of neighboring brain cells in order to make accurate predictions about the words a person is about to say.

Once the device was finalized, Cogan and Viventi worked with several neurosurgeons from Duke University Hospital, who recruited four patients to test the implants. The experiment involved temporarily placing the device in patients who were undergoing brain surgery for another condition, such as Parkinson’s disease. The experiment itself was relatively simple; however, time was of the essence.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

Participants heard a series of nonsense words, such as “kug” or “vip,” and then spoke each one out loud. The implant recorded neural and speech data from the speech motor cortex, which was then fed into a machine learning algorithm to see how accurately the sounds could be predicted.

The decoder got its predictions right 84% of the time when the sound came first in a nonsense word, like the “g” in “gak.” Accuracy dropped, however, when two sounds were similar, like “p” and “b.” Overall, the decoder was accurate 40% of the time, which is quite impressive given that many decoders require days of data, while this one worked from a 15-minute test.
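The paper’s actual decoding pipeline isn’t described here, but the basic idea, classifying spoken sounds from multi-electrode recordings, can be illustrated. Below is a minimal, hypothetical sketch in Python: the array shapes, the random stand-in data, and the choice of a simple linear classifier are all illustrative assumptions, not the Duke team’s method.

```python
# Hypothetical sketch of phoneme decoding from multi-electrode recordings.
# Data shapes and model choice are illustrative assumptions, not the
# authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_trials, n_electrodes, n_timebins = 400, 256, 20  # 256 sensors, as in the Duke array
n_phonemes = 9                                     # assumed small set of speech sounds

# Stand-in data: each trial is a window of activity across all electrodes,
# flattened into one feature vector per spoken sound.
X = rng.normal(size=(n_trials, n_electrodes * n_timebins))
y = rng.integers(0, n_phonemes, size=n_trials)     # phoneme label per trial

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A linear classifier is a common baseline for neural decoding.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print(f"Decoding accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

With random stand-in data the accuracy hovers near chance (about 1 in 9); the point of the sketch is only the shape of the task: one labeled feature vector of electrode activity per spoken sound, and a held-out test set to measure prediction accuracy.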

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

The team has just received a $2.4 million grant from the National Institutes of Health to continue the project.