Mind-reading device uses AI to turn brainwaves into audible speech

Signals from the brain can be converted into sounds by a computer (Image: Mopic/Alamy)

By Chelsea Whyte

Electrodes on the brain have been used to translate brainwaves into words spoken by a computer – which could be useful in the future to help people who have lost the ability to speak.
When you speak, your brain sends signals from the motor cortex to the muscles in your jaw, lips and larynx to coordinate their movement and produce a sound.
“The brain translates the thoughts of what you want to say into movements of the vocal tract, and that’s what we’re trying to decode,” says Edward Chang at the University of California San Francisco (UCSF). He and his colleagues created a two-step process to decode those thoughts using an array of electrodes surgically placed onto the part of the brain that controls movement, and a computer simulation of a vocal tract to reproduce the sounds of speech.
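In code terms, that two-step design can be pictured as two learned mappings chained together: one from electrode recordings to estimated vocal-tract movements, and one from those movements to speech sounds. The sketch below is a minimal illustration of that idea rather than the UCSF team's actual model; the use of recurrent networks for both stages and all the dimensions (electrode count, articulatory features, audio features) are assumptions for the example.

```python
import torch
import torch.nn as nn

class TwoStageSpeechDecoder(nn.Module):
    """Illustrative two-stage decoder: brain signals -> vocal-tract
    movements -> speech audio features. All sizes are placeholders."""

    def __init__(self, n_electrodes=256, n_articulatory=33, n_audio_features=32):
        super().__init__()
        # Stage 1: map electrode recordings to estimated articulator
        # movements (jaw, lips, tongue, larynx), framed as recurrent regression.
        self.brain_to_articulation = nn.LSTM(n_electrodes, 128, batch_first=True)
        self.articulation_head = nn.Linear(128, n_articulatory)
        # Stage 2: map articulator movements to acoustic features that a
        # vocoder could turn into audible speech.
        self.articulation_to_audio = nn.LSTM(n_articulatory, 128, batch_first=True)
        self.audio_head = nn.Linear(128, n_audio_features)

    def forward(self, brain_signals):
        # brain_signals: (batch, time, n_electrodes)
        h, _ = self.brain_to_articulation(brain_signals)
        articulation = self.articulation_head(h)       # (batch, time, n_articulatory)
        h2, _ = self.articulation_to_audio(articulation)
        audio_features = self.audio_head(h2)           # (batch, time, n_audio_features)
        return articulation, audio_features
```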

In their study, they worked with five participants who had electrodes on the surface of their motor cortex as a part of their treatment for epilepsy. These people were asked to read 101 sentences aloud – which contained words and phrases that covered all the sounds in English – while the team recorded the signals sent from the motor cortex during speech.
About 100 muscles are used to produce speech, and they are controlled by combinations of neurons firing together, so working out what the brain is telling the mouth to do isn't as simple as mapping the signal from one electrode to one muscle. Instead, the team trained an algorithm to reproduce the sound of a spoken word from the collection of signals sent to the lips, jaw and tongue.
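Training such a mapping, in very rough outline, amounts to fitting a model to pairs of recorded brain activity and the audio each participant actually produced. The loop below is a schematic sketch under that assumption, reusing the hypothetical TwoStageSpeechDecoder from the earlier example; the study's real training procedure, loss terms and feature representations are considerably more involved.

```python
import torch

def train_decoder(model, dataset, epochs=10, lr=1e-3):
    """Fit the illustrative decoder on (brain_signals, audio_features) pairs.
    `dataset` is assumed to yield aligned tensors for each spoken sentence."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(epochs):
        for brain_signals, target_audio in dataset:
            _, predicted_audio = model(brain_signals)
            loss = loss_fn(predicted_audio, target_audio)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```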
Electrodes like this were used to record brain activity (Image: UCSF)
The team says “robust performance” was possible when training the device on just 25 minutes of speech, but the decoder improved with more data. For this study, they trained the decoder on each participant’s spoken language to produce audio from their brain signals.
Once they had generated audio files based on the signals, the team asked hundreds of native English speakers to listen to the output sentences and identify the words from a set of 10, 25 or 50 choices.
The listeners transcribed 43 per cent of the trials perfectly when they had 25 words to choose from, and 21 per cent perfectly when they had 50 choices. One listener provided a perfect transcription for 82 sentences with the smaller word list and 60 with the larger.
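Those figures come from a closed-set listening test: each listener hears a synthesised sentence and assembles a transcription from a limited pool of candidate words, so chance performance is well above zero and accuracy depends on the pool size. A hedged sketch of how that per-trial scoring could be computed (not the authors' actual analysis code), where a trial counts as perfect only if every word matches the reference:

```python
def closed_set_accuracy(transcriptions, references):
    """Fraction of trials transcribed perfectly, given listener transcriptions
    and reference sentences as lists of word lists."""
    perfect = sum(1 for t, r in zip(transcriptions, references) if t == r)
    return perfect / len(references)

# Example: 2 of 3 trials match the reference exactly -> accuracy of about 0.67
refs = [["the", "rabbit", "ran"], ["ship", "ahoy"], ["bob", "spoke"]]
heard = [["the", "rodent", "ran"], ["ship", "ahoy"], ["bob", "spoke"]]
print(closed_set_accuracy(heard, refs))
```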
“Many of the mistaken words were similar in sound to the original word – rodent for rabbit – so in many cases the gist of the sentence could still be understood,” says team member Josh Chartier at UCSF. He says the artificial neural network did well at decoding fricatives – sounds like the ‘sh’ in ‘ship’ – but had a harder time with plosives, such as the ‘b’ sound in ‘bob’.

“It’s intelligible enough if you have some choice, but if you don’t have those choices, it might not be,” says Marc Slutzky at Northwestern University in Illinois. “To be fair, for an ultimate clinical application in a paralysed patient, if they can’t say anything, even having a vocabulary of a few hundred words could be a huge advance.”
That may be possible in the future, he says, as the team showed that an algorithm trained on one person’s speech output could be used to decode words from another participant.
The team also asked one person to mimic speech by moving their mouth without making any sounds. The system did not work as well as it did with spoken words, but they were still able to decode some intelligible speech from the mimed words.
Similar devices have been created that attempt to decode brain signals directly into sound, skipping the simulation of motion around the mouth and vocal tract, but it’s still unclear which approach is most effective.
This device doesn’t rely on the signals for creating sound, but only on those controlling motor functions, which are still sent even if someone is paralysed. So it could be useful for people who were once able to speak but lost that ability due to surgery or motor disorders like ALS, in which people lose control of their muscles.
Journal reference: Nature, DOI: 10.1038/s41586-019-1119-1
