
Scientists train AI to turn brain signals into speech




The researchers worked with patients with epilepsy undergoing brain surgery.

Pasajes / Getty Images

Neuroengineers have developed a system that uses machine-learning neural networks to read brain activity and translate it into speech.

An article published Tuesday in the journal Scientific Reports explains how a team at Columbia University's Zuckerman Mind Brain Behavior Institute used deep-learning algorithms and the same type of technology that powers devices like Apple's Siri and the Amazon Echo to produce "accurate and intelligible reconstructed speech." The research was reported earlier this month, but the journal article goes into greater depth.

The brain-computer framework could eventually give patients who have lost the ability to speak the chance to use their thoughts to communicate verbally through a synthesized robotic voice.

"We have shown that, with the right technology, the thoughts of these people could be decoded and understood by any listener," said Nima Mesgarani, the project's principal investigator in a statement.

When we speak, our brains light up, firing electrical signals around the old thought box. If scientists can decode those signals and understand how they relate to forming or hearing words, we move a step closer to translating them into speech. With enough understanding and ample processing power, such a system could become a device that translates thought directly into speech.

And that's what the team achieved, creating a "vocoder" that uses algorithms and neural networks to turn brain signals into speech.
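The article doesn't spell out the model's architecture, but the core idea can be sketched as a network that maps recorded neural features to the parameters a vocoder needs to render audio. Below is a minimal illustration of that mapping; the electrode count, context window, layer widths and parameter-frame size are all hypothetical stand-ins, not figures from the study.

```python
# A minimal sketch (not the authors' actual model): a small feed-forward
# network that maps a window of neural-activity features (e.g. per-electrode
# band power) to one frame of vocoder parameters. All sizes are illustrative.
import torch
import torch.nn as nn

N_ELECTRODES = 128     # hypothetical electrode count
CONTEXT_FRAMES = 9     # hypothetical temporal context window
VOCODER_PARAMS = 516   # hypothetical size of one vocoder parameter frame

class NeuralToVocoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ELECTRODES * CONTEXT_FRAMES, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, VOCODER_PARAMS),  # one frame of vocoder input
        )

    def forward(self, x):
        # x: (batch, N_ELECTRODES * CONTEXT_FRAMES) flattened feature window
        return self.net(x)

model = NeuralToVocoder()
dummy = torch.randn(1, N_ELECTRODES * CONTEXT_FRAMES)
frame = model(dummy)  # predicted vocoder parameters for one audio frame
print(frame.shape)    # torch.Size([1, 516])
```

Training such a network amounts to playing known audio, recording the brain's response, and fitting the mapping between the two, which is why the listening sessions described next matter.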

For this, the research team asked five epilepsy patients who were already undergoing brain surgery to help. They attached electrodes to different exposed surfaces of each patient's brain, then had the patients listen to 40 seconds of spoken sentences, repeated randomly six times. Listening to the stories helped train the vocoder.

Next, the patients listened to speakers counting from zero to nine while their brain signals were fed back into the vocoder. The vocoder's algorithm, known as WORLD, spat out its own sounds, which were then cleaned up by a neural network, resulting in robotic speech that mimicked the counting. You can hear what it sounds like here. It's not perfect, but it's understandable.
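WORLD is an openly available vocoder, and the snippet below uses the third-party pyworld Python binding to show what it does in isolation: decompose a waveform into pitch, spectral envelope and aperiodicity, then resynthesize audio from those parameters. This is a generic demo with a synthetic tone standing in for speech, not the study's pipeline; in the study, a neural network supplies parameters like these from brain recordings instead of deriving them from real audio.

```python
# Self-contained WORLD vocoder demo via the pyworld binding:
# analyze a waveform into its vocoder parameters, then resynthesize it.
import numpy as np
import pyworld as pw

fs = 16000                                 # sample rate in Hz
t = np.arange(fs) / fs                     # one second of audio
x = 0.5 * np.sin(2 * np.pi * 220.0 * t)    # stand-in "speech": a 220 Hz tone
x = x.astype(np.float64)                   # pyworld expects float64 samples

f0, time_axis = pw.harvest(x, fs)          # fundamental frequency contour
sp = pw.cheaptrick(x, f0, time_axis, fs)   # smoothed spectral envelope
ap = pw.d4c(x, f0, time_axis, fs)          # aperiodicity (noise component)

y = pw.synthesize(f0, sp, ap, fs)          # reconstruct the waveform
print(y.shape, y.dtype)                    # roughly fs samples of audio
```

Because the vocoder's output is driven entirely by those parameter tracks, anything that can predict them, including a network reading neural activity, can in principle drive speech synthesis.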

"We have found that people could understand and repeat sounds about 75 percent of the time, which is far above and beyond the previous attempts," Mesgarani said.

The researchers concluded that the accuracy of the reconstruction depends on the number of electrodes placed on the patient's brain and how long the vocoder is trained. As expected, more electrodes and longer training give the vocoder more data, resulting in a better reconstruction.

Looking ahead, the team wants to test what kinds of signals are emitted when a person merely imagines speaking, as opposed to listening to speech. The researchers also hope to try a more complex set of words and sentences. Improving the algorithms with more data could eventually lead to a brain implant that bypasses speech entirely, turning a person's thoughts into words.

This would be a monumental step for many.

"What has lost its ability to speak, either by injury or illness, would give the renewed possibility of connecting with the world around them," Mesgarani said.
