An end-to-end system that monitors the brain activity of hearing-impaired listeners to enhance speech intelligibility has been developed at the Columbia University School of Engineering and Applied Science, bringing cognitive hearing aids one step closer to reality (J Neural Eng. 2017). The system integrates recent single-channel automatic speech-separation algorithms into an auditory attention-decoding (AAD) platform: it first separates the individual speakers in the acoustic mixture it receives, then uses the listener's neural signals to determine which speaker is being attended to, and finally amplifies that speaker's voice for the listener. The whole process completes in under 10 seconds.
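The article does not include code, but attention decoding of this kind is typically framed as stimulus reconstruction: a pre-trained linear decoder maps the neural recording to an estimate of the attended speech envelope, which is then correlated with the envelope of each separated source. A minimal sketch of that decode-and-amplify loop, assuming numpy, a pre-trained decoder, and an external separation front end (the function names, the frame-based envelope, and the 12 dB gain are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def envelope(audio, frame_len=512):
    """Crude amplitude envelope: RMS over non-overlapping frames."""
    n = len(audio) // frame_len
    frames = audio[:n * frame_len].reshape(n, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def decode_attended(eeg, decoder, sources, frame_len=512):
    """Stimulus-reconstruction AAD: reconstruct the attended envelope
    from neural data with a linear decoder, then pick the separated
    source whose envelope correlates best with it.

    eeg:     (n_frames, n_channels) neural data at the envelope frame rate
    decoder: (n_channels,) pre-trained linear reconstruction weights
    sources: list of separated single-speaker waveforms
    """
    reconstructed = eeg @ decoder
    scores = []
    for s in sources:
        env = envelope(s, frame_len)
        m = min(len(reconstructed), len(env))   # align lengths
        scores.append(np.corrcoef(reconstructed[:m], env[:m])[0, 1])
    return int(np.argmax(scores))

def remix(sources, attended_idx, gain_db=12.0):
    """Amplify the attended source relative to the rest, then normalize
    to avoid clipping."""
    gain = 10.0 ** (gain_db / 20.0)
    out = sum(s * (gain if i == attended_idx else 1.0)
              for i, s in enumerate(sources))
    return out / np.max(np.abs(out))
```

In a running device this loop would be repeated on short windows of audio and neural data, so the amplification can follow the listener as their attention switches between talkers.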
This approach removes a key limitation of existing attention-decoding methods, which require clean recordings of each sound source in order to identify and amplify the attended one. Because the separation is single-channel, the system also relaxes the spatial-separation requirements of multi-channel methods, and it can still be used in tandem with beamforming for optimal source separation. The researchers said, "this work will move the field toward realistic hearing aid devices that can automatically and dynamically track a user's direction of attention, and amplify an attended speaker."
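Beamforming here means spatially filtering a microphone array toward the attended talker, which would complement the single-channel separation. For illustration only, a sketch of the simplest variant, delay-and-sum under a far-field assumption (the function names and speed-of-sound default are our assumptions, not from the article):

```python
import numpy as np

def steering_delays(mic_positions, direction, fs, c=343.0):
    """Integer sample delays aligning a far-field source arriving from
    `direction` (unit vector); mic_positions in meters, shape (n_mics, 3)."""
    taus = mic_positions @ direction / c    # relative arrival times (s)
    taus -= taus.min()                      # make all delays non-negative
    return np.round(taus * fs).astype(int)

def delay_and_sum(mic_signals, delays):
    """Shift each channel to time-align the target, then average.
    np.roll wraps edge samples, acceptable for a short sketch."""
    aligned = [np.roll(sig, -int(d)) for sig, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)
```

Averaging the time-aligned channels reinforces sound from the steered direction while other directions partially cancel, giving the separation stage a cleaner mixture to work with.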
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2fnh58s