MADRID, 22 Jun. (Portaltic/EP) –
A study at the University of California San Diego (United States) has trained a machine learning algorithm to translate the brain activity of a songbird and reconstruct the bird's song from it.
In a proof of concept, the US researchers reproduced the complex vocalizations of a zebra finch (Taeniopygia guttata), including the pitch, volume and timbre of the original bird's song.
The researchers highlighted the study's potential for building speech prostheses that would let people who have lost the ability to speak do so at the rate at which they think, noting that birdsong resembles human speech in many respects and, like it, is a learned behavior.
This advance approaches "the next frontier of functional recovery", said Timothy Gentner, professor of psychology and lead author, in a statement from the University of California San Diego.
To generate the birds' song, the system measures the brain signals of finches through implanted silicon electrodes, placed in the region of the brain that controls the muscles responsible for singing.
It then decodes those brain signals with machine learning algorithms that generate computer-synthesized copies of the finches' song.
The algorithm is trained on mathematical equations that model the changes in pressure and tension occurring in the finches' vocal organ, the syrinx, while the bird sings. According to the study authors, working through this model is more efficient than working from raw brain signals.
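The two-stage pipeline described above can be sketched in code: neural activity is first decoded into biomechanical syrinx parameters (pressure and tension), which then drive a sound synthesizer. The sketch below is a minimal illustration under assumed shapes and numbers; the linear decoder, the toy source model and every parameter are hypothetical stand-ins for the study's actual biomechanical equations.

```python
# Hypothetical sketch: neural activity -> syrinx parameters -> waveform.
# All names, shapes and constants are illustrative assumptions, not the
# authors' actual model.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a learned linear decoder mapping binned neural activity
# (one row of firing rates per time step) to two syrinx parameters.
def decode_syrinx_params(neural, weights, bias):
    """Map neural features (T, n_channels) to (T, 2): [pressure, tension]."""
    return neural @ weights + bias

# Stage 2: a toy source model standing in for the biomechanical equations:
# pressure drives loudness, tension drives pitch (assumed relationships).
def synthesize(params, sr=22050, step=0.005):
    samples_per_step = int(sr * step)
    phase = 0.0
    chunks = []
    for pressure, tension in params:
        freq = 400.0 + 600.0 * tension   # tension raises pitch (assumed)
        amp = max(pressure, 0.0)         # pressure sets amplitude (assumed)
        t = np.arange(samples_per_step) / sr
        chunks.append(amp * np.sin(2 * np.pi * freq * t + phase))
        phase += 2 * np.pi * freq * samples_per_step / sr  # keep phase continuous
    return np.concatenate(chunks)

# Fake "recorded" neural activity and a fake trained decoder.
T, n_channels = 40, 16
neural = rng.random((T, n_channels))
weights = rng.normal(scale=0.1, size=(n_channels, 2))
bias = np.array([0.5, 0.3])

params = decode_syrinx_params(neural, weights, bias)   # (40, 2)
song = synthesize(params)                              # one audio chunk per step
print(song.shape)
```

In the real study the decoder is trained on paired recordings of neural activity and song, so that the predicted pressure and tension trajectories reproduce the bird's own vocalizations when fed through the syrinx model.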
Because birds, like humans, cannot vocalize fluently when they hear their own voice with a delay, any speech prosthesis built on this technique must keep its latency low.
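This latency constraint amounts to a budget across the pipeline's stages. The sketch below checks an assumed per-stage breakdown against a tolerable feedback delay; the 50 ms budget and the stage timings are illustrative assumptions, not figures from the study.

```python
# Illustrative latency-budget check for a hypothetical speech prosthesis.
# The 50 ms budget is an assumed tolerance for auditory feedback delay;
# the per-stage timings are likewise invented for illustration.
def within_latency_budget(stage_ms, budget_ms=50.0):
    """Return (total latency in ms, True if the pipeline fits the budget)."""
    total = sum(stage_ms.values())
    return total, total <= budget_ms

stages = {                  # assumed per-stage processing times (ms)
    "neural_binning": 5.0,
    "decoder": 10.0,
    "synthesis": 8.0,
}
total, ok = within_latency_budget(stages)
print(total, ok)  # → 23.0 True
```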