The purpose of the articles posted on this blog is to share knowledge and current events concerning ecology and biodiversity conservation and protection, with biology treated as a matter of human security. Remember, these are meant to be conversation starters, not mere broadcasts :) so I kindly request, and would vastly prefer, that you share your comments and thoughts on the blog version of Focus on Arts and Ecology (all its past, present and future posts).


Brain-reading devices allow paralysed people to talk using their thoughts

Two studies report considerable improvements in technologies designed to help people with facial paralysis to communicate. 

A brain–computer interface translates the study participant’s brain signals into the speech and facial movements of an animated avatar. Credit: Noah Berger

Brain-reading implants enhanced using artificial intelligence (AI) have enabled two people with paralysis to communicate with unprecedented accuracy and speed.

In separate studies, both published on 23 August in Nature [1,2], two teams of researchers describe brain–computer interfaces (BCIs) that translate neural signals into text or words spoken by a synthetic voice. The BCIs can decode speech at 62 words per minute and 78 words per minute, respectively. Natural conversation happens at around 160 words per minute, but the new technologies are both faster than any previous attempts.

“It is now possible to imagine a future where we can restore fluid conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” said Francis Willett, a neuroscientist at Stanford University in California who co-authored one of the papers [1], in a press conference on 22 August.

These devices “could be products in the very near future”, says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands.

Electrodes and algorithms

Willett and his colleagues developed a BCI to interpret neural activity at the cellular level and translate it into text. They worked with 67-year-old Pat Bennett, who has motor neuron disease, also known as amyotrophic lateral sclerosis — a condition that causes a progressive loss of muscle control, resulting in difficulties moving and speaking.

First, the researchers operated on Bennett to insert arrays of small silicon electrodes into parts of the brain that are involved in speech, a few millimetres beneath the surface. Then they trained deep-learning algorithms to recognize the unique signals in Bennett’s brain when she attempted to speak various phrases using a large vocabulary set of 125,000 words and a small vocabulary set of 50 words. The AI decodes words from phonemes, the subunits of speech that form spoken words. For the 50-word vocabulary, the BCI worked 3.4 times faster than an earlier BCI developed by the same team [3] and achieved a 9.1% word-error rate. The error rate rose to 23.8% for the 125,000-word vocabulary. “About three in every four words are deciphered correctly,” Willett told the press conference.
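As an aside on how a figure such as the 9.1% word-error rate is measured: the metric is the word-level edit distance between the decoded sentence and the intended one, divided by the length of the intended sentence. Here is a minimal sketch in Python; the metric itself is standard, but the example sentences below are hypothetical, not taken from the study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-error rate: edit distance between word sequences,
    normalized by the length of the reference sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # cost of deleting every reference word
    for j in range(len(hyp) + 1):
        d[0][j] = j          # cost of inserting every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical decoded sentence vs. what the speaker intended:
print(word_error_rate("i would like some water please",
                      "i would like sum water"))  # 2 errors / 6 words ≈ 0.33
```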

“For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships,” said Bennett in a statement to reporters.

Reading brain activity

In a separate study [2], Edward Chang, a neurosurgeon at the University of California, San Francisco, and his colleagues worked with a 47-year-old woman named Ann, who lost her ability to speak after a brainstem stroke 18 years ago.

They used a different approach from that of Willett’s team, placing a paper-thin rectangle containing 253 electrodes on the surface of the brain’s cortex. The technique, called electrocorticography (ECoG), is considered less invasive than implanting electrodes inside the brain, and it can record the combined activity of thousands of neurons at the same time. The team trained AI algorithms to recognize patterns in Ann’s brain activity associated with her attempts to speak 249 sentences using a 1,024-word vocabulary. The device produced 78 words per minute with a median word-error rate of 25.5%.
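To make the decoding step more concrete: ECoG speech decoders commonly reduce each electrode’s signal to its high-gamma (roughly 70–150 Hz) amplitude envelope before feeding it to a neural network. The sketch below illustrates that feature-extraction step only; it is an assumption-laden illustration, not the authors’ pipeline, and the sampling rate, filter order and band edges are invented for the example:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000  # assumed sampling rate in Hz (illustrative, not from the paper)

def high_gamma_features(ecog: np.ndarray,
                        low: float = 70.0, high: float = 150.0) -> np.ndarray:
    """Band-pass each channel in the high-gamma range and return the
    analytic amplitude envelope, a feature commonly used in speech decoding.

    ecog: array of shape (n_samples, n_channels), e.g. 253 channels.
    """
    # 4th-order Butterworth band-pass; frequencies normalized by Nyquist.
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, ecog, axis=0)       # zero-phase filtering
    envelope = np.abs(hilbert(filtered, axis=0))  # amplitude envelope
    # Z-score per channel so the decoder sees comparable scales.
    return (envelope - envelope.mean(axis=0)) / (envelope.std(axis=0) + 1e-8)

# Fake one second of 253-channel ECoG and extract decoder-ready features.
rng = np.random.default_rng(0)
signal = rng.standard_normal((FS, 253))
features = high_gamma_features(signal)
print(features.shape)  # (1000, 253)
```

In the study itself, a trained network maps such feature sequences onward to text, speech sounds and avatar movements.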

Although the implants used by Willett’s team, which capture neural activity more precisely, outperformed this on larger vocabularies, it is “nice to see that with ECoG, it's possible to achieve low word-error rate”, says Blaise Yvert, a neurotechnology researcher at the Grenoble Institute of Neuroscience in France.

[Embedded media. Credit: Chang Lab]

Chang and his team also created customized algorithms to convert Ann’s brain signals into a synthetic voice and an animated avatar that mimics facial expressions. They personalized the voice to sound like Ann’s before her injury, by training it on recordings from her wedding video.

“The simple fact of hearing a voice similar to your own is emotional,” Ann told the researchers in a feedback session after the study. “When I had the ability to talk for myself was huge!”

“Voice is a really important part of our identity. It’s not just about communication, it’s also about who we are,” says Chang.

Clinical applications

Many improvements are needed before the BCIs can be made available for clinical use. “The ideal scenario is for the connection to be cordless,” Ann told researchers. A BCI suitable for everyday use would have to be a fully implantable system with no visible connectors or cables, adds Yvert. Both teams hope to continue increasing the speed and accuracy of their devices with more-robust decoding algorithms.

Furthermore, the participants of both studies still have the ability to engage their facial muscles when thinking about speaking, and their speech-related brain regions are intact, says Herff. “This will not be the case for every patient.”

“We see this as a proof of concept and just providing motivation for industry people in this space to translate it into a product somebody can actually use,” says Willett.

The devices must also be tested on many more people to prove their reliability. “No matter how elegant and technically sophisticated these data are, we have to understand them in context, in a very measured way,” says Judy Illes, a neuroethics researcher at the University of British Columbia in Vancouver, Canada. “We have to be careful with over-promising wide generalizability to large populations,” she adds. “I’m not sure we’re there yet.”

References

1. Willett, F. R. et al. Nature https://doi.org/10.1038/s41586-023-06377-x (2023).

2. Metzger, S. L. et al. Nature https://doi.org/10.1038/s41586-023-06443-4 (2023).

3. Willett, F. R. et al. Nature 593, 249–254 (2021).

(Source: Nature)
