Breaking the Silence: Giving Voice to the Silent Through Thought


Summary: Researchers enabled a silent participant to produce speech using thought alone. Electrodes implanted deep in the participant’s brain transmitted electrical signals to a computer, which then vocalized the imagined syllables.

The technology offers hope that paralyzed people may regain the ability to speak. The study marks an important step toward brain-computer interfaces for voluntary communication.

Highlights:

  1. Technology: Depth electrodes transmit brain signals to a computer, which vocalizes imagined speech.
  2. Participants: The experiment involved an epilepsy patient with implanted depth electrodes.
  3. Future impact: The approach could allow paralyzed people to communicate through thought alone.

Source: Tel Aviv University

A scientific breakthrough by researchers from Tel Aviv University and Tel Aviv Sourasky Medical Center (Ichilov Hospital) has enabled a silent participant to produce speech using only the power of thought.

In one experiment, the participant silently imagined saying one of two syllables. Depth electrodes implanted in his brain transmitted the electrical signals to a computer, which then vocalized the syllables aloud.

The study was led by Dr. Ariel Tankus of Tel Aviv University School of Medical and Health Sciences and Tel Aviv Sourasky Medical Center (Ichilov Hospital), and Dr. Ido Strauss of Tel Aviv University School of Medical and Health Sciences and director of the Functional Neurosurgery Unit at Ichilov Hospital.

The results of this study were published in the journal Neurosurgery.

These findings offer hope that people who are completely paralyzed, whether from ALS, stroke, or brain injury, may regain the ability to speak voluntarily.

“The patient in the study has epilepsy and was hospitalized to undergo resection of the epileptic focus in his brain,” explains Dr. Tankus. “To do this, of course, you first have to locate the focus, the source of the ‘short circuit’ that sends powerful electrical waves through the brain.

“This applies to a small subset of epilepsy patients who do not respond well to medication and require neurosurgical intervention, and to an even smaller subset whose suspected focus lies deep in the brain rather than on the surface of the cortex.

“To pinpoint the exact location, electrodes must be implanted into the deep structures of their brains. The patients are then hospitalized, waiting for the next seizure.

“When a seizure occurs, the electrodes tell neurologists and neurosurgeons where the target is, allowing them to operate with precision. From a scientific perspective, this offers a rare opportunity to get a glimpse into the depths of a living human brain.

“Fortunately, the epileptic patient hospitalized in Ichilov agreed to participate in the experiment, which could eventually help completely paralyzed people to express themselves again through artificial speech.”

In the first stage of the experiment, with the depth electrodes already implanted in the patient’s brain, the Tel Aviv University researchers asked him to say two syllables out loud: /a/ and /e/.

They recorded his brain activity as he said these sounds, then used machine learning and deep learning to train models that identified the specific brain cells whose electrical activity signaled the intention to say /a/ or /e/.
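The article does not detail the pipeline, but as a rough, non-authoritative sketch, the training stage could resemble the following. The firing-rate features, the label encoding, and the logistic-regression classifier are all illustrative assumptions rather than the study’s reported methods.

```python
# Hedged sketch of the training stage; all data here are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-trial features: 40 overt-speech trials, 16 recorded units,
# each value a firing rate (or high-frequency power) for one unit.
X = rng.normal(size=(40, 16))
y = rng.integers(0, 2, size=40)  # syllable labels: 0 = /a/, 1 = /e/

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # offline accuracy estimate
clf.fit(X, y)                              # final model for silent-speech trials
print(f"cross-validated accuracy: {scores.mean():.2f}")
```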

Once the computer had learned to recognize the pattern of electrical activity associated with these two syllables, the patient was asked to imagine saying /a/ and /e/. The computer then translated his electrical signals and played the prerecorded /a/ or /e/ sound accordingly.
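The second stage can be illustrated the same way. In this self-contained sketch, a classifier trained on overt-speech trials labels an imagined-speech trial and the system selects the matching prerecorded sound; the file names and the omitted playback call are hypothetical.

```python
# Hedged sketch of the silent-speech stage; training data are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_overt = rng.normal(size=(40, 16))    # hypothetical overt-speech features
y_overt = rng.integers(0, 2, size=40)  # 0 = /a/, 1 = /e/
clf = LogisticRegression(max_iter=1000).fit(X_overt, y_overt)

SOUNDS = {0: "a.wav", 1: "e.wav"}  # prerecorded syllables (assumed file names)

def decode_and_vocalize(imagined_features: np.ndarray) -> str:
    """Classify one imagined-speech trial and return the sound file to play."""
    label = int(clf.predict(imagined_features.reshape(1, -1))[0])
    # Actual audio playback is left abstract; it depends on the audio stack.
    return SOUNDS[label]

print(decode_and_vocalize(rng.normal(size=16)))  # e.g. "a.wav"
```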

“My research interests include speech encoding and decoding, that is, how individual brain cells take part in the speech process: producing speech, hearing speech, and imagining speech, or ‘speaking silently,’” says Dr. Tankus.

“In this experiment, for the first time in history, we were able to link the building blocks of speech to the activity of individual cells in the brain regions from which we recorded.

“This allowed us to distinguish the electrical signals that represent the sounds /a/ and /e/. Currently, our research focuses on two constituent elements of speech, two syllables.

“Our ambition is of course to achieve complete speech, but two different syllables are already enough to enable a completely paralyzed person to say ‘yes’ and ‘no’. In the future, for example, it will be possible to train a computer for an ALS patient in the early stages of the disease, while he or she is still able to speak.

“The computer would learn to recognize electrical signals in the patient’s brain, allowing it to interpret those signals even after the patient has lost the ability to move his muscles. And that’s just one example.”

“Our study represents an important step towards the development of a brain-computer interface capable of replacing the brain’s control pathways for speech production, thereby enabling completely paralyzed individuals to communicate voluntarily with their environment again.”
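The journal paper (abstract below) decodes “high-frequency activity” recorded by the depth electrodes. Purely as a hedged illustration of what such a feature can look like, the following computes band power in a high-frequency range; the 80 to 300 Hz band and the 1 kHz sampling rate are assumptions, not the study’s parameters.

```python
# Hedged sketch: one common "high-frequency activity" feature is band power.
import numpy as np
from scipy.signal import butter, filtfilt

def high_freq_power(trace: np.ndarray, fs: float = 1000.0,
                    band: tuple = (80.0, 300.0)) -> float:
    """Mean power of `trace` within the given frequency band."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    return float(np.mean(filtfilt(b, a, trace) ** 2))

# Example on a stand-in signal (2 s at the assumed 1 kHz sampling rate).
print(high_freq_power(np.random.default_rng(1).normal(size=2000)))
```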

About this BCI and neurotechnology research news

Author: Ariel Tankus
Source: Tel Aviv University
Contact: Ariel Tankus – Tel Aviv University
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“A speech neuroprosthesis in the frontal lobe and hippocampus: decoding high-frequency activity into phonemes” by Ariel Tankus et al. Neurosurgery


Abstract

A speech neuroprosthesis in the frontal lobe and hippocampus: decoding high-frequency activity into phonemes

BACKGROUND AND OBJECTIVES:

Speech loss due to injury or disease is devastating. Here we present a novel speech neuroprosthesis that artificially articulates the building blocks of speech based on high-frequency activity in brain areas never before exploited for neuroprosthetics: the anterior cingulate and orbitofrontal cortices, and the hippocampus.

METHODS:

A 37-year-old epilepsy patient with intact speech, implanted with depth electrodes for clinical reasons only, silently controlled the neuroprosthesis almost immediately and naturally to voluntarily produce two vowel sounds.

RESULTS:

In the first set of trials, the participant had the neuroprosthesis artificially produce the different vowel sounds with 85% accuracy. In subsequent trials, performance improved steadily, which may be attributed to neuroplasticity. We show that a neuroprosthesis trained on overt speech data can be controlled silently.

CONCLUSION:

These results could pave the way for a new strategy of neuroprosthesis implantation at early stages of the disease (e.g., amyotrophic lateral sclerosis), while speech is intact, for better training that still allows silent control at later stages. The results demonstrate the clinical feasibility of direct decoding of high-frequency activity that includes spiking activity in the aforementioned areas for silent phoneme production that can serve as part of a neuroprosthesis to replace lost speech control pathways.


