Mind-Reading AI Tech Translates Brain Activity Into A Voice

Scientists have created a new implant that translates the brain activity of a man with severe paralysis into nearly instantaneous speech using artificial intelligence (AI). The brain implant relies on technology similar to that behind voice assistants like Alexa and Siri, according to the scientists.

This groundbreaking research was conducted by a team at the University of California, Davis, and recently published in the journal Nature. In their paper, the scientists demonstrated how their brain–computer interface (BCI) allowed a person with paralysis to speak intelligibly and expressively.

The researchers placed 256 micro-electrodes in the brain of a man with severe dysarthria caused by amyotrophic lateral sclerosis (ALS). These micro-electrodes were located in the part of the brain that helps control the facial muscles used for speaking. The team then used an AI model trained to associate specific patterns of brain activity with the speech sounds, words, and inflections the participant was trying to produce at each moment.
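To make that moment-by-moment decoding idea concrete, here is a minimal, hypothetical sketch in Python. The linear "model," the feature sizes, and the decode_bin function are illustrative stand-ins; the study's actual trained AI model is far more sophisticated and is not described in detail in this article.

```python
# Illustrative sketch only: a toy moment-by-moment decoder in the spirit of
# the system described above. The linear "model" here is a hypothetical
# stand-in for the study's trained AI model.
import numpy as np

N_ELECTRODES = 256        # matches the number of implanted micro-electrodes
N_SPEECH_FEATURES = 40    # hypothetical size of the decoded speech features

rng = np.random.default_rng(0)
# Hypothetical trained weights mapping one time bin of neural activity
# to intended speech features.
W = rng.standard_normal((N_SPEECH_FEATURES, N_ELECTRODES)) * 0.01

def decode_bin(neural_bin: np.ndarray) -> np.ndarray:
    """Map one time bin of neural activity to intended speech features."""
    return W @ neural_bin

# Simulate a short stream of neural activity, decoded bin by bin so the
# output can track what the speaker intends at each moment.
for step in range(5):
    neural_bin = rng.standard_normal(N_ELECTRODES)
    speech_features = decode_bin(neural_bin)
    print(f"bin {step}: decoded {speech_features.shape[0]} speech features")
    # In the real system, these features would drive a voice synthesizer
    # to produce audible speech almost instantly.
```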

Describing the system's output, the scientists wrote, "The resulting synthesized voice was often (but not consistently) intelligible and human listeners were able to identify the words with high accuracy."

“The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak,” Maitreyee Wairagkar, first author of the study and project scientist in the Neuroprosthetics Lab at UC-Davis, said in a statement. “Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice.”

The brain-computer interface translated the study participant's neural signals into audible speech played through a speaker with a delay of just one-fortieth of a second (about 25 milliseconds). That is similar to the delay a person experiences between speaking and hearing the sound of their own voice.
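As a quick back-of-the-envelope check on that figure, the snippet below works through the arithmetic; only the one-fortieth-of-a-second delay comes from the article, and the 16 kHz audio rate is an assumption for illustration.

```python
# Simple arithmetic behind the reported delay. The 16 kHz sample rate is an
# assumption for illustration; only the one-fortieth-of-a-second figure
# comes from the article.
DELAY_S = 1 / 40
print(f"Delay: {DELAY_S * 1000:.0f} ms")  # -> Delay: 25 ms

SAMPLE_RATE_HZ = 16_000  # assumed output audio rate
chunk_samples = int(SAMPLE_RATE_HZ * DELAY_S)
print(f"Audio per chunk at {SAMPLE_RATE_HZ} Hz: {chunk_samples} samples")
# -> Audio per chunk at 16000 Hz: 400 samples
```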

The technology also allowed the participant to say new words (words not already known to the system) and to make interjections. He was able to modulate the intonation of his computer-generated voice to ask a question or emphasize specific words in a sentence.

The participant also took steps toward varying pitch by singing simple, short melodies.

“Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions,” said David Brandman, co-director of the UC-Davis Neuroprosthetics Lab and the neurosurgeon who inserted the patient’s brain implant. “The results of this research provide hope for people who want to talk but can’t. We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative for people living with paralysis.”

