Thursday, June 28, 2018

Electric and acoustic harmonic integration predicts speech-in-noise performance in hybrid cochlear implant users


Publication date: Available online 28 June 2018
Source: Hearing Research
Author(s): Damien Bonnard, Adam Schwalje, Bruce Gantz, Inyong Choi
Background
Pitch perception of complex tones relies on place- or temporal fine structure-based mechanisms for resolved harmonics and on the temporal envelope of unresolved harmonics. Combining this information is essential for speech-in-noise performance, as it allows segregation of a target speaker from background noise. In hybrid cochlear implant (H-CI) users, low-frequency acoustic hearing should provide pitch from resolved harmonics, while high-frequency electric hearing should provide temporal envelope pitch from unresolved harmonics. How the acoustic and electric auditory inputs interact in H-CI users is largely unknown. Harmonicity and inharmonicity are emergent features of sound in which overtones are concordant or discordant with the fundamental frequency. We hypothesized that some H-CI users would be able to integrate acoustic and electric information for complex tone pitch perception, and that this ability would correlate with speech-in-noise performance. In this study, we used perception of inharmonicity to demonstrate this integration.

Methods
Fifteen H-CI users with only acoustic hearing below 500 Hz, only electric hearing above 2 kHz, and more than 6 months of CI experience, along with eighteen normal-hearing (NH) controls, were presented with harmonic and inharmonic sounds. Each stimulus comprised a low-frequency component, corresponding to the H-CI user's acoustic hearing (fundamental frequency between 125 and 174 Hz), and a high-frequency component, corresponding to electric hearing. Subjects were asked to identify the more inharmonic sound, which requires perceptual integration of the low and high components. Speech-in-noise performance was tested in both groups using the California Consonant Test (CCT); perception of Consonant-Nucleus-Consonant (CNC) words in quiet and AzBio sentences in noise was also tested in the H-CI users.

Results
Eight of the H-CI subjects (53%), and all of the NH subjects, scored significantly above chance on at least one subset of the inharmonicity detection task. Inharmonicity detection ability, but not age or pure tone average, predicted speech scores in a linear model. Inharmonicity detection performance was significantly correlated with speech scores in both quiet and noise for H-CI users, but not with speech-in-noise performance for NH listeners. Musical experience predicted inharmonicity detection ability but did not predict speech performance.

Conclusions
We demonstrate integration of acoustic and electric information in H-CI users for complex pitch sensation. The correlation with speech scores in H-CI users may be associated with the ability to segregate a target speaker from background noise using the speaker's fundamental frequency.
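To make the stimulus design concrete, here is a minimal Python sketch of how a harmonic and an inharmonic complex tone might be synthesized with a low-frequency band (below 500 Hz, the acoustic hearing region) and a high-frequency band (above 2 kHz, the electric hearing region) sharing, or not sharing, the same fundamental. The sampling rate, duration, harmonic numbers, and mistuning amount are illustrative assumptions, not the parameters used in the study.

```python
# Illustrative sketch only: harmonic vs. inharmonic two-band complex tones.
# All numeric parameters below are assumptions for demonstration.
import numpy as np

FS = 44100   # sampling rate (Hz) -- assumption
DUR = 0.5    # tone duration (s) -- assumption
F0 = 150.0   # fundamental within the 125-174 Hz range stated in the abstract

t = np.arange(int(FS * DUR)) / FS

def complex_tone(f0, harmonics, mistune_cents=0.0):
    """Sum of equal-amplitude partials at the given harmonic numbers of f0.
    A nonzero mistune_cents shifts the partials away from exact multiples
    of f0, making them discordant with the fundamental."""
    shift = 2.0 ** (mistune_cents / 1200.0)
    tone = np.zeros_like(t)
    for n in harmonics:
        tone += np.sin(2 * np.pi * (n * f0 * shift) * t)
    return tone / len(harmonics)

# Low band: resolved harmonics falling below 500 Hz (acoustic region).
low_harmonics = [n for n in range(1, 4) if n * F0 < 500]
# High band: harmonics above 2 kHz (electric region), likely unresolved.
high_harmonics = [n for n in range(14, 21) if n * F0 > 2000]

# Harmonic stimulus: both bands are exact multiples of the same fundamental.
harmonic = complex_tone(F0, low_harmonics) + complex_tone(F0, high_harmonics)

# Inharmonic stimulus: only the high band is mistuned (here +50 cents), so
# detecting the inharmonicity requires integrating the two bands, as in the
# task described above.
inharmonic = (complex_tone(F0, low_harmonics)
              + complex_tone(F0, high_harmonics, mistune_cents=50.0))
```

In a two-interval task of the kind described, a listener would hear one harmonic and one mistuned stimulus and indicate which sounded more inharmonic; above-chance performance implies that pitch information from the low (acoustic) and high (electric) bands was combined.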



