A research team at Columbia University has reconstructed speech from the human brain's auditory cortex using a combination of brain implants and deep learning. Speech reconstruction via neural prosthetics (sensory implants) had been attempted before, but the output quality was judged too low for practical use. To get around this problem, the team placed electrodes in the brains of five volunteers scheduled
to undergo brain surgery for epilepsy. The volunteers listened to a closed set of sentences and numbers, and the resulting brain activity was used to train deep-learning speech recognition software. The experiment yielded a 75% intelligibility score (a measure of how comprehensible the reconstructed speech was, not of its accuracy). This outcome offers hope for developing technology that would allow people with speech paralysis to communicate. Reaching that goal would require far larger amounts of training data, as this study was limited to five participants and a closed input set, but the same principles could be used to develop speech reproduction from human thought.