Brain tech breakthrough restores ALS patient’s ability to speak

Last August, Casey Harrell spoke the first clear words his five-year-old daughter could remember hearing him say, a repeat of his wedding vows to her mother, Levana Saxon. The adults in the room cried.
The moment was possible thanks to a wave of innovation in one of the most challenging areas of medicine: reconnecting the brain to the body once something – an accident or an illness – has severed the ties. While Elon Musk’s Neuralink gets most of the attention and investor money in the space, academic labs and rival start-ups are notching significant advances in repairing that broken bond.
“I am using this in a very practical way, right now,” said Harrell (46), who was diagnosed with amyotrophic lateral sclerosis (ALS) in 2019 and lost his ability to speak clearly a few years later. “This is real life for me,” he said, choking up while describing the day he was able to communicate with his own voice again.
His ability to converse stems from 256 tiny electrodes that researchers from the University of California, Davis, implanted in his brain in an almost five-hour surgical procedure last summer. While the technology known as a brain-computer interface (BCI) is often aimed at restoring movement, the improvement in Harrell’s speech detailed in the New England Journal of Medicine underscores its broader promise.
Start-ups are circling as breakthroughs reveal the complex signals the nervous system uses to control the lips, jaw, tongue and larynx, and advances in artificial intelligence allow computers to decode those signals to restore communication. Much like a prosthesis that replaces a missing limb, the field has its own nomenclature for the device: a speech neuroprosthesis.
Initially, “the idea of restoring speech seemed unattainable given the complexity of language”, wrote Edward Chang, chairman of neurological surgery at the University of California, San Francisco, in an editorial that accompanied the study. “Over the past decade, the concept of a speech neuroprosthesis has gone from science fiction to reality.”
It’s still early. The technology is expensive and bulky, requiring computers in Harrell’s home. It’s slow, helping him speak at 33 words a minute, far below the 160 words a minute of natural speech. And the long-term performance remains unknown: in the same edition of the journal, Dutch researchers detailed the slow deterioration of a similar device used for seven years by a woman with severe paralysis from ALS.
Harrell learned he had ALS, also known as Lou Gehrig’s disease, shortly after his daughter’s birth. The disorder destroys nerve cells in the brain and the spinal cord, leaving patients unable to control their muscles. His natural speech sounds like grunting or babbling to the untrained ear, even as his mind remains sharp.
Yet these days he is easily understood. The electrodes in his brain track the firing of his neurons to predict what he’s trying to say, then a speech generator using a digitally reconstructed version of his pre-ALS voice talks for him — to the delight of everyone involved.
Far from making him feel like a cyborg, the technology is enhancing his human connections. Friends jump into conversations that were impossible a year ago. Discussions with his wife veer deeper than the merely transactional. The implant gives him a sense of normalcy.
“People like me don’t get to feel like that very often,” he says.
Companies including Neuralink, Paradromics, Synchron and Precision Neuroscience are working on similar technology. Neuralink’s device was implanted in a human for the first time earlier this year, in a quadriplegic man who described the results as mind-blowing. Synchron has implants in several patients, though they don’t contain as many electrodes.
Harrell’s electrodes sit in his precentral gyrus, a pathway involved in speech that runs roughly ear to ear across the top of the brain. They were implanted during a five-hour surgery in July 2023, weeks after a relative heard about the technology at UC Davis, where researchers were looking for their first patient.
“On one hand, you don’t want to rush into brain surgery,” Harrell said during an interview at his home in May. “On the other hand, when you have this disease, the level of risk that you’d accept is definitely higher.”
Harrell, who uses a wheelchair, lacks the motor control to manoeuvre a traditional mouse. Before surgery, he used a head mouse to tap out texts and emails, which he can now compose with the aid of the device.
“It’s empowering him to say what he wants to say, when he wants to say it,” said David Brandman, the neurosurgeon at UC Davis who implanted the device. “It’s a different level.”
With quips and observations flowing, it seems like the BCI is reading Harrell’s thoughts. But that’s not what’s happening, according to Brandman and fellow neuroscientist Sergey Stavisky, who together run UC Davis’ Neuroprosthetics Lab.
Instead, it’s tracking the attempted movements associated with the words he’s trying to say. As the brain tries to activate muscles in the tongue, larynx and throat — instructions that get waylaid as Harrell’s neurons die from the ALS — the implanted electrodes pick up the messages.
The signals are processed and decoded by four computers running in his Oakland apartment, a rig that Nicholas Card, a postdoctoral scholar in neurological surgery at UC Davis and the study’s lead author, says could shrink in time.
A recurrent neural network first predicts the probabilities of the phonemes, or sound units, he’s trying to say. A preliminary language model assembles those sound units into possible words and phrases, then a second, more refined language model crafts the most likely set of words. After a phrase is decoded, Harrell can play an audio version in his own voice.
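The two-stage idea can be sketched in miniature: a neural network emits a probability for each phoneme at each moment, and a language model rescores the candidate words. The sketch below is purely illustrative, not the UC Davis system; its phoneme inventory, pronunciation lexicon, language-model priors and one-frame-per-phoneme alignment are all invented for demonstration (real decoders handle alignment with far more sophisticated methods).

```python
# Illustrative two-stage decoder: per-frame phoneme probabilities -> word.
# All names and numbers here are toy assumptions, not the actual device.

PHONEMES = ["N", "ER", "D", "IY", "T"]  # toy phoneme inventory

# Toy pronunciation lexicon: word -> phoneme sequence
LEXICON = {
    "nerdy": ["N", "ER", "D", "IY"],
    "dirty": ["D", "ER", "T", "IY"],
}

# Toy language-model prior: how likely each word is in context
LM_PRIOR = {"nerdy": 0.6, "dirty": 0.4}


def acoustic_score(frames, phoneme_seq):
    """Stage 1: how well the network's frame-wise phoneme probabilities
    match a candidate pronunciation (assumes one frame per phoneme)."""
    if len(frames) != len(phoneme_seq):
        return 0.0
    score = 1.0
    for frame, ph in zip(frames, phoneme_seq):
        score *= frame[PHONEMES.index(ph)]
    return score


def decode(frames):
    """Stage 2: combine the acoustic score with the language-model
    prior and return the highest-scoring word in the lexicon."""
    best_word, best_score = None, 0.0
    for word, seq in LEXICON.items():
        s = acoustic_score(frames, seq) * LM_PRIOR[word]
        if s > best_score:
            best_word, best_score = word, s
    return best_word


# Frames where the network is fairly sure of N-ER-D-IY but leaks a
# little probability onto the phonemes of the rival word.
frames = [
    [0.70, 0.10, 0.10, 0.05, 0.05],  # mostly "N"
    [0.05, 0.80, 0.05, 0.05, 0.05],  # mostly "ER"
    [0.10, 0.05, 0.70, 0.05, 0.10],  # mostly "D"
    [0.05, 0.05, 0.05, 0.80, 0.05],  # mostly "IY"
]
print(decode(frames))  # prints "nerdy"
```

When the network’s output is noisy, the acoustic scores of rival words with similar pronunciations can come close, which is how confusions such as the one described below can arise even with a language model in the loop.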
It doesn’t always work perfectly: When a Bloomberg reporter was there, it mistakenly displayed “dirty” instead of “nerdy”. In those cases, Harrell can ask it to try again by activating a button on the screen, using a portable eye tracker made by Tobii.
Rudimentary versions of the technology date to 2004, when the original implant was placed in the brain of Matt Nagle, a Massachusetts man paralysed in a stabbing. While much of the innovation since then has focused on speed, the real breakthrough with Harrell’s device comes in accuracy.
After less than two hours of training, most of it on a 125,000-word vocabulary, the BCI was 90 per cent accurate. That rose to 95 per cent within days and later hit 97 per cent.
Despite the slower pace of decoding, it’s enough for him to participate in conversations at home and help plan strategies for his employer, the Sunrise Project, an Australian nonprofit. He works on projects to tackle climate change, including targeting financial institutions.
The technology’s commercial potential is limited by the small current patient population and high cost. But that could change, and investors are watching.
“We consistently track developments in research, especially at universities,” said Konstantine Buhler, a venture capitalist at Sequoia Capital, who applauded the “very impressive” UC Davis work. “Academic institutions are well positioned to fund science and research, whereas venture capital is most helpful when a technology transitions from ‘research’ to ‘development’.”
One area where Harrell’s device stands out: after training, his wife can handle it herself. That contrasts with many implants that require licensed medical personnel to operate and are used mainly in labs.
So far, he’s the only patient to get the UC Davis technology, which plugs in through connectors on the top of his head. It makes him prone to infections and can be disconcerting for those seeing the juxtaposition of wires and brain for the first time. Harrell says none of that matters.
“People like me don’t have time to wait for the perfect product,” he says. “It’s still good enough for me, right now.” – Bloomberg
