Facebook’s work on neural input technology for AR and VR appears to be heading in a more wrist-based direction, but the company continues to fund research on implanted brain-computer interfaces. The latest phase of a years-long Facebook-funded UCSF study, called Project Steno, translates a paralyzed, speech-impaired patient’s attempts to speak into words on a screen.
“This is the first time someone naturally trying to say words could be decoded into words just from brain activity,” said Dr. David Moses, lead author of the study published Wednesday in the New England Journal of Medicine. “Hopefully, this is proof of principle for direct speech control of a communication device, using attempted speech as the control signal by someone who cannot speak, who is paralyzed.”
Brain-computer interfaces (BCIs) have been behind several recent research successes, including Stanford work that can turn imagined handwriting into projected text. The UCSF study takes a different approach, analyzing actual attempts to speak and acting almost like a translator.
The study, led by UCSF neurosurgeon Dr. Edward Chang, involved implanting an electrode “neuroprosthesis” in a paralyzed man who had suffered a brain stem stroke at age 20. With a patch of electrodes implanted over the part of his brain associated with controlling the vocal tract, the man attempted to answer questions displayed on a screen. UCSF’s machine learning algorithm can recognize 50 words and turn them into sentences in real time. For example, if the patient sees a prompt asking, “How are you today?” the response “I am very good” appears on the screen, word by word.
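The article does not describe the algorithm’s internals, but the general shape of closed-vocabulary decoding can be sketched: classify the neural activity recorded during each attempted word against a small fixed word set, then string the best matches into a sentence. The sketch below is purely illustrative, using invented random “centroids” in place of real learned neural patterns and a stand-in for the 50-word vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the study's 50-word vocabulary (illustrative subset).
VOCAB = ["i", "am", "very", "good", "hungry", "thirsty"]
DIM = 16  # dimensionality of a per-word neural feature vector (invented)

# Pretend each word has a characteristic activity pattern learned from
# labeled attempted-speech trials; here they are just random centroids.
centroids = {w: rng.normal(size=DIM) for w in VOCAB}

def decode_word(features: np.ndarray) -> str:
    """Return the vocabulary word whose centroid is nearest the features."""
    return min(VOCAB, key=lambda w: np.linalg.norm(features - centroids[w]))

def decode_sentence(trials: list) -> str:
    """Decode a sequence of attempted-word feature vectors into a sentence."""
    return " ".join(decode_word(t) for t in trials)

# Simulate noisy neural recordings of an attempt to say "i am very good".
attempt = [centroids[w] + 0.1 * rng.normal(size=DIM)
           for w in ["i", "am", "very", "good"]]
print(decode_sentence(attempt))  # with low noise, recovers "i am very good"
```

Constraining the output to a small closed vocabulary is what makes real-time decoding tractable; the actual system also used statistical language modeling to clean up word sequences, which this toy omits.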
Moses explained that the work is intended to continue beyond Facebook’s funding phase, and that the research still has a long way to go. It is currently unclear how much of the speech recognition comes from recorded patterns of brain activity, from vocal utterances, or from a combination of the two.
Moses was quick to explain that the study, like other BCI work, is not mind reading: it relies on sensing brain activity that occurs only when the patient deliberately attempts a specific behavior, such as speaking. Moses also said the UCSF team’s work has not yet translated to non-invasive neural interfaces. Elon Musk’s Neuralink promises wireless transmission of data from brain-implanted electrodes for future research and assistive use, but so far that technology has only been demonstrated in monkeys.
Meanwhile, Facebook Reality Labs Research has pivoted away from head-worn brain-computer interfaces for future VR/AR headsets, focusing for the near term on wrist-worn devices based on technology acquired from CTRL-Labs. Facebook Reality Labs has a non-invasive head-worn research prototype that can study brain activity, and the company has announced plans to make it available for open-source research projects as it stops focusing on head-mounted neural hardware. (UCSF receives funding from Facebook, but no hardware.)
“Aspects of the optical head-mounted work will be applicable to our EMG research on the wrist. We will continue to use optical BCI as a research tool to build better wrist-based sensor models and algorithms. While we will continue to leverage these prototypes in our research, we are no longer developing head-mounted optical BCI devices to sense speech production. That is one reason why we will share our head-mounted hardware prototypes with other researchers, who can apply our innovations to other use cases,” a Facebook representative confirmed via email.
Still, consumer-targeted neural input technology is in its infancy. While consumer devices using non-invasive head or wrist sensors already exist, they are currently far less accurate than implanted electrodes.