sciencehabit shares a report from Science Magazine: A man unable to speak after a stroke has produced sentences through a system that reads electrical signals from speech production areas of his brain, researchers report this week. The approach has previously been used in non-disabled volunteers to reconstruct spoken or imagined sentences, but this is the first demonstration of its potential in the type of patient it's intended to help. The participant had a stroke more than 10 years ago that left him with anarthria, an inability to control the muscles involved in speech. Researchers used a computational model known as a deep-learning algorithm to interpret patterns of brain activity in the sensorimotor cortex, a brain region involved in producing speech, and "decoded" sentences he attempted to read aloud.

In the new study, [the researchers] temporarily removed a portion of the participant's skull and laid a thin sheet of electrodes, smaller than a credit card, directly over his sensorimotor cortex. To "train" a computer algorithm to associate brain activity patterns with the onset of speech and with particular words, the team needed reliable information about what the man intended to say and when. So the researchers repeatedly presented one of 50 words on a screen and asked him to attempt to say it on cue. Once the algorithm was trained on data from the individual-word task, the man tried to read sentences built from the same set of 50 words, such as "Bring my glasses, please." To improve the algorithm's guesses, the researchers added a processing component called a natural language model, which uses common word sequences to predict the likely next word in a sentence. With that approach, the system got only about 25% of the words in a sentence wrong, they report today in The New England Journal of Medicine, and the man could produce sentences at a rate of up to 18 words per minute.

Read more of this story at Slashdot.
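The two-stage idea described above (a per-word classifier whose guesses are re-scored by a language model that knows common word sequences) can be sketched with a toy Viterbi decoder. Everything here is illustrative: the vocabulary, the classifier probabilities, and the bigram table are made up, and the study's actual models are far more sophisticated.

```python
# Toy sketch: combine hypothetical per-word classifier posteriors with a
# hypothetical bigram language model via Viterbi search. Not the study's
# actual model -- just the decoding principle the summary describes.
import math

VOCAB = ["bring", "my", "glasses", "water", "please"]

# Hypothetical classifier posteriors P(word | brain activity), one dict
# per attempted word in the sentence.
classifier_probs = [
    {"bring": 0.6, "my": 0.1, "glasses": 0.1, "water": 0.1, "please": 0.1},
    {"bring": 0.1, "my": 0.5, "glasses": 0.2, "water": 0.1, "please": 0.1},
    # Here the classifier slightly prefers "water" over "glasses"...
    {"bring": 0.05, "my": 0.05, "glasses": 0.4, "water": 0.45, "please": 0.05},
    {"bring": 0.1, "my": 0.1, "glasses": 0.1, "water": 0.1, "please": 0.6},
]

# Hypothetical bigram probabilities P(next | prev); "<s>" = sentence start.
bigram = {
    ("<s>", "bring"): 0.5, ("bring", "my"): 0.6, ("my", "glasses"): 0.5,
    ("my", "water"): 0.1, ("glasses", "please"): 0.7, ("water", "please"): 0.3,
}
DEFAULT = 0.01  # crude smoothing for unseen bigrams

def decode(probs):
    """Viterbi search maximizing log classifier + log bigram scores."""
    # best[word] = (log score of best path ending in word, that path)
    best = {w: (math.log(probs[0][w]) + math.log(bigram.get(("<s>", w), DEFAULT)), [w])
            for w in VOCAB}
    for step in probs[1:]:
        new_best = {}
        for w in VOCAB:
            score, path = max(
                (s + math.log(step[w]) + math.log(bigram.get((p[-1], w), DEFAULT)), p)
                for s, p in best.values()
            )
            new_best[w] = (score, path + [w])
        best = new_best
    return max(best.values())[1]

# ...but the language-model prior flips the third word back to "glasses".
print(decode(classifier_probs))  # ['bring', 'my', 'glasses', 'please']
```

The point of the sketch: the classifier alone would output "bring my water please" at the third position, but because "my glasses" and "glasses please" are more probable word sequences, the combined score picks "glasses", which is exactly the kind of correction the natural language model contributes.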
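The "about 25% of the words in a sentence wrong" figure is a word error rate. A minimal sketch of how such a rate is typically computed (word-level edit distance divided by the number of intended words; the example sentences are invented, not from the study):

```python
# Sketch of word-error-rate computation: Levenshtein distance over words,
# normalized by the reference (intended) sentence length.

def word_error_rate(reference, hypothesis):
    """Minimum word substitutions/insertions/deletions / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One wrong word out of four intended words -> 25% word error rate.
print(word_error_rate("bring my glasses please", "bring my water please"))  # 0.25
```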