Wednesday, 12 April 2017

A non-toxic dose of cobalt chloride blocks hair cells of the zebrafish lateral line

Publication date: Available online 12 April 2017
Source: Hearing Research
Author(s): William J. Stewart, Jacob L. Johansen, James C. Liao
Experiments on the flow-sensitive lateral line system of fishes have provided important insights into the function and sensory transduction of vertebrate hair cells. A common experimental approach has been to pharmacologically block lateral line hair cells and measure how behavior changes. Cobalt chloride (CoCl2) blocks the lateral line by inhibiting calcium movement through the membrane channels of hair cells, but high concentrations can be toxic, making it unclear whether changes in behavior are due to a blocked lateral line or poor health. Here, we identify a non-toxic cobalt treatment that completely blocks lateral line hair cells. We exposed zebrafish larvae at 5 days post-fertilization to CoCl2 concentrations ranging from 1 to 20 mM for 15 minutes and measured (1) the spiking rate of the afferent neurons contacting hair cells and (2) the larvae's health and long-term survival. Our results show that a 15-minute exposure to 5 mM CoCl2 abolishes both spontaneous and evoked afferent firing. This treatment does not change swimming behavior and results in >85% survival after 5 days. Weaker CoCl2 treatments did not eliminate afferent activity, while stronger treatments caused close to 50% mortality. Our work provides a guideline for future zebrafish investigations in which physiological confirmation of a blocked lateral line system is required.



from #Audiology via ola Kala on Inoreader http://ift.tt/2o9kk21
via IFTTT

Delayed Changes in Auditory Status in Cochlear Implant Users with Preserved Acoustic Hearing

Publication date: Available online 12 April 2017
Source: Hearing Research
Author(s): Rachel A. Scheperle, Viral D. Tejani, Julia K. Omtvedt, Carolyn J. Brown, Paul J. Abbas, Marlan R. Hansen, Bruce J. Gantz, Jacob J. Oleson, Marie V. Ozanne
This retrospective review explores delayed-onset hearing loss in 85 individuals who received cochlear implants designed to preserve acoustic hearing at the University of Iowa Hospitals and Clinics between 2001 and 2015. Repeated measures of unaided behavioral audiometric thresholds, electrode impedance, and electrically evoked compound action potential (ECAP) amplitude growth functions were used to characterize longitudinal changes in auditory status. Participants were grouped into two primary categories according to changes in unaided behavioral thresholds: (1) stable hearing or symmetrical hearing loss and (2) delayed loss of hearing in the implanted ear. Thirty-eight percent of this sample presented with delayed-onset hearing loss of varying degree and rate of change. Neither array type nor insertion approach (round window or cochleostomy) had a significant effect on prevalence. Electrode impedance increased abruptly for many individuals exhibiting precipitous hearing loss; the increase was often transient. These impedance increases were significantly larger than the impedance changes observed in individuals with stable or symmetrical hearing loss. Moreover, the impedance changes were associated with changes in behavioral thresholds for individuals with a precipitous drop in behavioral thresholds. These findings suggest a change in the electrode environment coincident with the change in auditory status. Changes in ECAP thresholds, growth-function slopes, and suprathreshold amplitudes were not correlated with changes in behavioral thresholds, suggesting that neural responsiveness in the region excited by the implant is relatively stable. Further exploration of the etiology of delayed-onset hearing loss post-implantation is needed, with particular interest in mechanisms associated with changes in the intracochlear environment.



from #Audiology via ola Kala on Inoreader http://ift.tt/2ouGrTV
via IFTTT

Brain Activity During Phonation in Women With Muscle Tension Dysphonia: An fMRI Study

Publication date: Available online 11 April 2017
Source: Journal of Voice
Author(s): Maryna Kryshtopava, Kristiane Van Lierde, Iris Meerschman, Evelien D'Haeseleer, Pieter Vandemaele, Guy Vingerhoets, Sofie Claeys
OBJECTIVES: The main objectives of this functional magnetic resonance imaging (fMRI) study were (1) to investigate brain activity during phonation in women with muscle tension dysphonia (MTD) in comparison with healthy controls, and (2) to explain the neurophysiological mechanism of laryngeal hyperfunction/tension during phonation in patients with MTD.
METHODS: Ten women with MTD and fifteen healthy women participated in this study. The fMRI experiment was carried out using a block-design paradigm. Brain activation during phonation and exhalation was analyzed using BrainVoyager software.
RESULTS: Statistical analysis of the fMRI data demonstrated that MTD patients control phonation using the auditory, motor, frontal, parietal, and subcortical areas, similar to phonation control in healthy people. Comparing the phonation tasks between the two groups revealed higher brain activity in the precentral gyrus, inferior, middle, and superior frontal gyri, lingual gyrus, insula, cerebellum, midbrain, and brainstem, as well as lower brain activity in the cingulate gyrus, superior and middle temporal gyri, and inferior parietal lobe in the MTD group. No differences were found between the two groups regarding exhalation control.
CONCLUSIONS: The findings of this study provide insight into phonation and exhalation control in patients with MTD. The imaging results demonstrated that in patients with MTD, altered (higher/lower) brain activity may result in laryngeal tension and vocal hyperfunction.
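For readers unfamiliar with block designs, the sketch below builds a toy boxcar regressor of alternating task (phonation) and rest blocks. The block lengths and function name are hypothetical, not taken from this study, and a real analysis would also convolve the boxcar with a haemodynamic response function before fitting a GLM:

```python
import numpy as np

def boxcar_regressor(n_scans, block_len, start_with_task=True):
    """Toy fMRI block-design regressor: 1.0 during task blocks,
    0.0 during rest blocks, alternating every `block_len` scans."""
    reg = np.zeros(n_scans)
    on = start_with_task
    for start in range(0, n_scans, block_len):
        if on:
            reg[start:start + block_len] = 1.0
        on = not on
    return reg

# 60 scans, blocks of 10 scans -> task/rest/task/rest/task/rest
design = boxcar_regressor(n_scans=60, block_len=10)
```

Correlating such a regressor with each voxel's time series is the basic logic behind contrasting phonation against rest.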



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2o7rhRT
via IFTTT

Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

Int J Audiol. 2017 Apr 10;:1-11

Authors: de Taillez T, Grimm G, Kollmeier B, Neher T

Abstract
OBJECTIVE: To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC).
DESIGN: Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality.
STUDY SAMPLE: Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each).
RESULTS: IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality.
CONCLUSIONS: The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
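The paper's interaural-magnification algorithm itself is not detailed in this summary. As a rough illustration of the general idea only, the sketch below magnifies the broadband interaural level difference (ILD) between two channels; the real IM algorithm also involves interaural time cues and per-band processing, and the function name, gain parameter, and symmetric split are all assumptions:

```python
import numpy as np

def magnify_ild(left, right, gain=2.0, eps=1e-12):
    """Scale the interaural level difference (ILD) between two channels.

    Measures the broadband ILD in dB and re-applies it magnified by
    `gain`, splitting the extra difference symmetrically across ears."""
    l_rms = np.sqrt(np.mean(left ** 2)) + eps
    r_rms = np.sqrt(np.mean(right ** 2)) + eps
    ild_db = 20 * np.log10(l_rms / r_rms)   # current ILD in dB
    extra_db = (gain - 1.0) * ild_db        # additional difference to apply
    left_out = left * 10 ** (extra_db / 40)
    right_out = right * 10 ** (-extra_db / 40)
    return left_out, right_out
```

Doubling a 6 dB ILD this way yields roughly 12 dB of separation; in diffuse noise, however, the study found that such magnification can distort rather than enhance the cues.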

PMID: 28395561 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2osfP6h
via IFTTT

Evaluation of a wireless remote microphone in bimodal cochlear implant recipients.

Int J Audiol. 2017 Apr 10;:1-7

Authors: Vroegop JL, Dingemanse JG, Homans NC, Goedegebure A

Abstract
OBJECTIVE: To evaluate the benefit of a wireless remote microphone (MM) for speech recognition in noise in bimodal adult cochlear implant (CI) users both in a test setting and in daily life.
DESIGN: This prospective study measured speech reception thresholds in noise in a repeated measures design with factors including bimodal hearing and MM use. The participants also had a 3-week trial period at home with the MM.
STUDY SAMPLE: Thirteen post-lingually deafened adult bimodal CI users.
RESULTS: A significant SRT improvement of 5.4 dB was found for the CI with the MM compared with the CI without the MM. Pairing the MM with the hearing aid (HA) as well yielded a further 2.2 dB SRT improvement over the MM paired with the CI alone. In daily life, participants reported better speech perception in various challenging listening situations when using the MM in the bimodal condition.
CONCLUSION: There is a clear advantage of bimodal listening (CI and HA) compared to CI alone when applying advanced wireless remote microphone techniques to improve speech understanding in adult bimodal CI users.

PMID: 28395552 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2osg5Sv
via IFTTT

Incorporating ceiling effects during analysis of speech perception data from a paediatric cochlear implant cohort.

Int J Audiol. 2017 Apr 10;:1-9

Authors: Bruijnzeel H, Cattani G, Stegeman I, Topsakal V, Grolman W

Abstract
OBJECTIVE: To compare speech perception among children differing in age at cochlear implantation.
DESIGN: We evaluated speech perception by comparing consonant-vowel-consonant (auditory) (CVC(A)) scores at five-year follow-up in children implanted between 1997 and 2010. The proportion of children in each age-at-implantation group whose scores fell within the 95% CI of CVC(A) ceiling scores (>95%) was calculated to identify speech perception differences masked by ceiling effects.
STUDY SAMPLE: 54 children implanted between 8 and 36 months.
RESULTS: Although ceiling effects occurred, a CVC(A) score difference between age-at-implantation groups was confirmed (H(4) = 30.36; p < 0.001). Outperformance of early (<18 months) compared to later implanted children was demonstrated (p < 0.001). A larger proportion of children implanted before 13 months compared to children implanted between 13 and 18 months reached ceiling scores. Logistic regression confirmed that age at implantation predicted whether a child reached a ceiling score.
CONCLUSIONS: Ceiling effects can obscure a thorough delineation of speech perception. Nevertheless, this study showed long-term speech perception outperformance of early implanted children (<18 months) whether or not ceiling effects were accounted for during analysis. Development of long-term assessment tools not affected by ceiling effects is essential to maintain adequate assessment of young implanted infants.
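The abstract does not give the model details. As a hedged sketch of how logistic regression can test whether age at implantation predicts reaching a ceiling score, the code below fits a minimal one-predictor model by gradient descent; the ages, outcomes, and all names are illustrative stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical data: age at implantation (months) and whether the child
# reached a CVC(A) ceiling score (>95%) at five-year follow-up.
age_months = np.array([8, 10, 12, 13, 15, 18, 24, 30, 36], dtype=float)
ceiling = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

def fit_logistic(x, y, lr=0.01, n_iter=20000):
    """Fit y ~ sigmoid(b0 + b1*x) by plain gradient descent."""
    x = (x - x.mean()) / x.std()  # standardise the predictor for stable steps
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * np.mean(p - y)
        b1 -= lr * np.mean((p - y) * x)
    return b0, b1

b0, b1 = fit_logistic(age_months, ceiling)
# A negative slope (b1 < 0) means later implantation lowers the
# predicted odds of reaching a ceiling score.
```

In practice a statistics package would also report standard errors and p-values; this sketch only shows the direction of the fitted effect.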

PMID: 28395548 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p7bLGD
via IFTTT

Development of equally intelligible Telugu sentence-lists to test speech recognition in noise.

Int J Audiol. 2017 Apr 10;:1-8

Authors: Tanniru K, Narne VK, Jain C, Konadath S, Singh NK, Sreenivas KJ, K A

Abstract
OBJECTIVE: To develop sentence lists in the Telugu language for the assessment of speech recognition threshold (SRT) in the presence of background noise through identification of the mean signal-to-noise ratio required to attain a 50% sentence recognition score (SRTn).
DESIGN: This study was conducted in three phases. The first phase involved the selection and recording of Telugu sentences. In the second phase, 20 lists, each consisting of 10 sentences with equal intelligibility, were formulated using a numerical optimisation procedure. In the third phase, the SRTn of the developed lists was estimated using adaptive procedures on individuals with normal hearing.
STUDY SAMPLE: A total of 68 native Telugu speakers with normal hearing participated in the study. Of these, 18 (including the speakers) performed various subjective measures in the first phase, 20 performed sentence/word recognition in noise in the second phase, and 30 participated in the list-equivalency procedures in the third phase.
RESULTS: In all, 15 lists of comparable difficulty were formulated as test material. The mean SRTn across these lists was -2.74 dB SNR (SD = 0.21).
CONCLUSIONS: The developed sentence lists provided a valid and reliable tool to measure SRTn in Telugu native speakers.
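The exact adaptive procedure used to estimate SRTn is not specified in this summary. A minimal 1-up/1-down staircase, which by design converges on the 50%-correct point, might look like the sketch below; the function name, step size, and trial count are all hypothetical:

```python
def estimate_srt50(respond, start_snr=0.0, step=2.0, n_trials=20):
    """Toy 1-up/1-down adaptive track converging on the SNR for 50%
    sentence recognition (SRTn). `respond(snr)` returns True when the
    sentence presented at that SNR is repeated correctly."""
    snr = start_snr
    track = []
    for _ in range(n_trials):
        track.append(snr)
        if respond(snr):
            snr -= step  # correct -> make the task harder (lower SNR)
        else:
            snr += step  # incorrect -> make the task easier
    # Estimate SRTn as the mean SNR over the second half of the track,
    # after the staircase has settled around the threshold.
    tail = track[len(track) // 2:]
    return sum(tail) / len(tail)
```

With a deterministic listener who succeeds at or above -3 dB SNR, the track oscillates around that point and the tail average recovers it.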

PMID: 28395544 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p7nuFo
via IFTTT

Assessment of balance and vestibular functions in patients with idiopathic sudden sensorineural hearing loss.

J Huazhong Univ Sci Technolog Med Sci. 2017 Apr;37(2):264-270

Authors: Liu J, Zhou RH, Liu B, Leng YM, Liu JJ, Liu DD, Zhang SL, Kong WJ

Abstract
This study investigated the relationship among the severity of hearing impairment, vestibular function, and balance function in patients with idiopathic sudden sensorineural hearing loss (ISSNHL). A total of 35 ISSNHL patients (including 21 patients with vertigo) were enrolled. All of the patients underwent audiometry, the sensory organization test (SOT), the caloric test, the cervical vestibular-evoked myogenic potential (cVEMP) test, and the ocular vestibular-evoked myogenic potential (oVEMP) test. A significant relationship was found between vertigo and hearing loss grade (P=0.009), and between SOT VEST grade and hearing loss grade (P=0.001). The oVEMP test showed the highest abnormality rate, followed by the caloric and cVEMP tests, in patients both with and without vertigo. The vestibular end organs were more susceptible to damage in patients with vertigo than in patients without vertigo. A significant relationship was found between the presence of vertigo and SOT VEST grade (P=0.010). We demonstrated that the vestibular end organs may be impaired not only in patients with vertigo but also in patients without vertigo. Cochlear and vestibular impairment could be more serious in patients with vertigo than in those without. Vertigo does not necessarily bear a causal relationship with impairment of the vestibular end organs. SOT VEST grade could be used to reflect the presence of vertigo in ISSNHL patients. Apart from audiometry, the function of the peripheral vestibular end organs and balance function should be evaluated to understand ISSNHL comprehensively. Better assessment of the condition will aid in the clinical diagnosis, treatment, and prognosis evaluation of ISSNHL.

PMID: 28397037 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p74NBs
via IFTTT

Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

Related Articles

Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

Int J Audiol. 2017 Apr 10;:1-11

Authors: de Taillez T, Grimm G, Kollmeier B, Neher T

Abstract
OBJECTIVE: To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC).
DESIGN: Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality.
STUDY SAMPLE: Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each).
RESULTS: IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality.
CONCLUSIONS: The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.

PMID: 28395561 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2osfP6h
via IFTTT

Evaluation of a wireless remote microphone in bimodal cochlear implant recipients.

Related Articles

Evaluation of a wireless remote microphone in bimodal cochlear implant recipients.

Int J Audiol. 2017 Apr 10;:1-7

Authors: Vroegop JL, Dingemanse JG, Homans NC, Goedegebure A

Abstract
OBJECTIVE: To evaluate the benefit of a wireless remote microphone (MM) for speech recognition in noise in bimodal adult cochlear implant (CI) users both in a test setting and in daily life.
DESIGN: This prospective study measured speech reception thresholds in noise in a repeated measures design with factors including bimodal hearing and MM use. The participants also had a 3-week trial period at home with the MM.
STUDY SAMPLE: Thirteen post-lingually deafened adult bimodal CI users.
RESULTS: A significant improvement in SRT of 5.4 dB was found between the use of the CI with the MM and the use of the CI without the MM. By also pairing the MM to the hearing aid (HA) another improvement in SRT of 2.2 dB was found compared to the situation with the MM paired to the CI alone. In daily life, participants reported better speech perception for various challenging listening situations, when using the MM in the bimodal condition.
CONCLUSION: There is a clear advantage of bimodal listening (CI and HA) compared to CI alone when applying advanced wireless remote microphone techniques to improve speech understanding in adult bimodal CI users.

PMID: 28395552 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2osg5Sv
via IFTTT

Incorporating ceiling effects during analysis of speech perception data from a paediatric cochlear implant cohort.

Related Articles

Incorporating ceiling effects during analysis of speech perception data from a paediatric cochlear implant cohort.

Int J Audiol. 2017 Apr 10;:1-9

Authors: Bruijnzeel H, Cattani G, Stegeman I, Topsakal V, Grolman W

Abstract
OBJECTIVE: To compare speech perception between children with a different age at cochlear implantation.
DESIGN: We evaluated speech perception by comparing consonant-vowel-consonant (auditory) (CVC(A)) scores at five-year follow-up of children implanted between 1997 and 2010. The proportion of children from each age-at-implantation group reaching the 95%CI of CVC(A) ceiling scores (>95%) was calculated to identify speech perception differences masked by ceiling effects.
STUDY SAMPLE: 54 children implanted between 8 and 36 months.
RESULTS: Although ceiling effects occurred, a CVC(A) score difference between age-at-implantation groups was confirmed (H (4) = 30.36; p < 0.001). Outperformance of early (<18 months) compared to later implanted children was demonstrated (p <0.001). A larger proportion of children implanted before 13 months compared to children implanted between 13 and 18 months reached ceiling scores. Logistic regression confirmed that age at implantation predicted whether a child reached a ceiling score.
CONCLUSIONS: Ceiling effects can mask thorough delineation of speech perception. However, this study showed long-term speech perception outperformance of early implanted children (<18 months) either including or not accounting for ceiling effects during analysis. Development of long-term assessment tools not affected by ceiling effects is essential to maintain adequate assessment of young implanted infants.

PMID: 28395548 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p7bLGD
via IFTTT

Development of equally intelligible Telugu sentence-lists to test speech recognition in noise.

Related Articles

Development of equally intelligible Telugu sentence-lists to test speech recognition in noise.

Int J Audiol. 2017 Apr 10;:1-8

Authors: Tanniru K, Narne VK, Jain C, Konadath S, Singh NK, Sreenivas KJ, K A

Abstract
OBJECTIVE: To develop sentence lists in the Telugu language for the assessment of speech recognition threshold (SRT) in the presence of background noise through identification of the mean signal-to-noise ratio required to attain a 50% sentence recognition score (SRTn).
DESIGN: This study was conducted in three phases. The first phase involved the selection and recording of Telugu sentences. In the second phase, 20 lists, each consisting of 10 sentences with equal intelligibility, were formulated using a numerical optimisation procedure. In the third phase, the SRTn of the developed lists was estimated using adaptive procedures on individuals with normal hearing.
STUDY SAMPLE: A total of 68 native Telugu speakers with normal hearing participated in the study. Of these, 18 (including the speakers) performed on various subjective measures in first phase, 20 performed on sentence/word recognition in noise for second phase and 30 participated in the list equivalency procedures in third phase.
RESULTS: In all, 15 lists of comparable difficulty were formulated as test material. The mean SRTn across these lists corresponded to -2.74 (SD = 0.21).
CONCLUSIONS: The developed sentence lists provided a valid and reliable tool to measure SRTn in Telugu native speakers.

PMID: 28395544 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p7nuFo
via IFTTT

Assessment of balance and vestibular functions in patients with idiopathic sudden sensorineural hearing loss.

Related Articles

Assessment of balance and vestibular functions in patients with idiopathic sudden sensorineural hearing loss.

J Huazhong Univ Sci Technolog Med Sci. 2017 Apr;37(2):264-270

Authors: Liu J, Zhou RH, Liu B, Leng YM, Liu JJ, Liu DD, Zhang SL, Kong WJ

Abstract
This study investigated the relationship among the severity of hearing impairment, vestibular function and balance function in patients with idiopathic sudden sensorineural hearing loss (ISSNHL). A total of 35 ISSNHL patients (including 21 patients with vertigo) were enrolled. All of the patients underwent audiometry, sensory organization test (SOT), caloric test, cervical vestibular-evoked myogenic potential (cVEMP) test and ocular vestibular-evoked myogenic potential (oVEMP) test. Significant relationship was found between vertigo and hearing loss grade (P=0.009), and between SOT VEST grade and hearing loss grade (P=0.001). The abnormal rate of oVEMP test was the highest, followed by the abnormal rates of caloric and cVEMP tests, not only in patients with vertigo but also in those without vertigo. The vestibular end organs were more susceptible to damage in patients with vertigo (compared with patients without vertigo). Significant relationship was found between presence of vertigo and SOT VEST grade (P=0.010). We demonstrated that vestibular end organs may be impaired not only in patients with vertigo but also in patients without vertigo. The cochlear and vestibular impairment could be more serious in patients with vertigo than in those without vertigo. Vertigo does not necessarily bear a causal relationship with the impairment of the vestibular end organs. SOT VEST grade could be used to reflect the presence of vertigo state in the ISSNHL patients. Apart from audiometry, the function of peripheral vestibular end organs and balance function should be evaluated to comprehensively understand ISSNHL. Better assessment of the condition will help us in clinical diagnosis, treatment and prognosis evaluation of ISSNHL.

PMID: 28397037 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p74NBs
via IFTTT

Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

Related Articles

Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

Int J Audiol. 2017 Apr 10;:1-11

Authors: de Taillez T, Grimm G, Kollmeier B, Neher T

Abstract
OBJECTIVE: To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC).
DESIGN: Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality.
STUDY SAMPLE: Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each).
RESULTS: IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality.
CONCLUSIONS: The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.

PMID: 28395561 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2osfP6h
via IFTTT

Evaluation of a wireless remote microphone in bimodal cochlear implant recipients.

Related Articles

Evaluation of a wireless remote microphone in bimodal cochlear implant recipients.

Int J Audiol. 2017 Apr 10;:1-7

Authors: Vroegop JL, Dingemanse JG, Homans NC, Goedegebure A

Abstract
OBJECTIVE: To evaluate the benefit of a wireless remote microphone (MM) for speech recognition in noise in bimodal adult cochlear implant (CI) users both in a test setting and in daily life.
DESIGN: This prospective study measured speech reception thresholds in noise in a repeated measures design with factors including bimodal hearing and MM use. The participants also had a 3-week trial period at home with the MM.
STUDY SAMPLE: Thirteen post-lingually deafened adult bimodal CI users.
RESULTS: A significant improvement in SRT of 5.4 dB was found between the use of the CI with the MM and the use of the CI without the MM. By also pairing the MM to the hearing aid (HA) another improvement in SRT of 2.2 dB was found compared to the situation with the MM paired to the CI alone. In daily life, participants reported better speech perception for various challenging listening situations, when using the MM in the bimodal condition.
CONCLUSION: There is a clear advantage of bimodal listening (CI and HA) compared to CI alone when applying advanced wireless remote microphone techniques to improve speech understanding in adult bimodal CI users.

PMID: 28395552 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2osg5Sv
via IFTTT

Incorporating ceiling effects during analysis of speech perception data from a paediatric cochlear implant cohort.

Related Articles

Incorporating ceiling effects during analysis of speech perception data from a paediatric cochlear implant cohort.

Int J Audiol. 2017 Apr 10;:1-9

Authors: Bruijnzeel H, Cattani G, Stegeman I, Topsakal V, Grolman W

Abstract
OBJECTIVE: To compare speech perception between children with different ages at cochlear implantation.
DESIGN: We evaluated speech perception by comparing consonant-vowel-consonant (auditory) (CVC(A)) scores at five-year follow-up of children implanted between 1997 and 2010. The proportion of children from each age-at-implantation group reaching the 95% CI of CVC(A) ceiling scores (>95%) was calculated to identify speech perception differences masked by ceiling effects.
STUDY SAMPLE: 54 children implanted between 8 and 36 months.
RESULTS: Although ceiling effects occurred, a CVC(A) score difference between age-at-implantation groups was confirmed (H(4) = 30.36; p < 0.001). Outperformance of early (<18 months) compared to later implanted children was demonstrated (p < 0.001). A larger proportion of children implanted before 13 months compared to children implanted between 13 and 18 months reached ceiling scores. Logistic regression confirmed that age at implantation predicted whether a child reached a ceiling score.
CONCLUSIONS: Ceiling effects can prevent a thorough delineation of speech perception differences. Nevertheless, this study showed long-term speech perception outperformance by early implanted children (<18 months) whether or not ceiling effects were accounted for in the analysis. Development of long-term assessment tools unaffected by ceiling effects is essential for the adequate assessment of young implanted infants.
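The logistic-regression step (predicting whether a child reaches a ceiling score from age at implantation) can be sketched on synthetic data; the cohort, coefficients, and fitting details below are illustrative assumptions, not the study's model or data:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit P(ceiling | x) = sigmoid(b0 + b1 * x) by batch gradient descent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic cohort: earlier implantation -> more likely to reach ceiling.
random.seed(1)
ages = [random.uniform(8, 36) for _ in range(200)]    # months at implantation
xs = [(a - 20.0) / 10.0 for a in ages]                # centre/scale for stable fitting
ceiling = [1 if random.random() < 1.0 / (1.0 + math.exp((a - 18.0) / 4.0)) else 0
           for a in ages]
b0, b1 = fit_logistic(xs, ceiling)
# b1 < 0: later implantation lowers the odds of a ceiling score
```

A negative fitted slope on this synthetic cohort mirrors the direction of the effect reported in the abstract.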

PMID: 28395548 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p7bLGD
via IFTTT

Development of equally intelligible Telugu sentence-lists to test speech recognition in noise.

Related Articles

Development of equally intelligible Telugu sentence-lists to test speech recognition in noise.

Int J Audiol. 2017 Apr 10;:1-8

Authors: Tanniru K, Narne VK, Jain C, Konadath S, Singh NK, Sreenivas KJ, K A

Abstract
OBJECTIVE: To develop sentence lists in the Telugu language for the assessment of speech recognition threshold (SRT) in the presence of background noise through identification of the mean signal-to-noise ratio required to attain a 50% sentence recognition score (SRTn).
DESIGN: This study was conducted in three phases. The first phase involved the selection and recording of Telugu sentences. In the second phase, 20 lists, each consisting of 10 sentences with equal intelligibility, were formulated using a numerical optimisation procedure. In the third phase, the SRTn of the developed lists was estimated using adaptive procedures on individuals with normal hearing.
STUDY SAMPLE: A total of 68 native Telugu speakers with normal hearing participated in the study. Of these, 18 (including the speakers) took part in various subjective measures in the first phase, 20 performed sentence/word recognition in noise in the second phase, and 30 participated in the list-equivalency procedures in the third phase.
RESULTS: In all, 15 lists of comparable difficulty were formulated as test material. The mean SRTn across these lists was -2.74 dB SNR (SD = 0.21).
CONCLUSIONS: The developed sentence lists provided a valid and reliable tool to measure SRTn in Telugu native speakers.
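An adaptive SRTn estimate of this kind can be sketched as a simple 1-up/1-down staircase converging on the 50% point; this is a hypothetical illustration, not the study's exact adaptive rule:

```python
def staircase_srtn(respond, start_snr=4.0, step=2.0, trials=30):
    """1-up/1-down adaptive track converging on the 50% point (SRTn).

    After each correct response the SNR is lowered by `step`; after each
    error it is raised.  `respond(snr)` returns True when the sentence
    is recognised at that SNR.
    """
    snr = start_snr
    reversals = []
    last = None
    for _ in range(trials):
        correct = respond(snr)
        if last is not None and correct != last:
            reversals.append(snr)      # track direction changes
        last = correct
        snr += -step if correct else step
    tail = reversals[-6:] or [snr]     # SRTn: mean SNR at the last reversals
    return sum(tail) / len(tail)
```

With an idealised listener whose threshold is -2.74 dB SNR, the track settles into oscillation around the threshold and the reversal mean lands within one step size of it.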

PMID: 28395544 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2p7nuFo
via IFTTT

[Etiological analysis of patients in a vertigo- and dizziness-oriented outpatient department].

Related Articles

[Etiological analysis of patients in a vertigo- and dizziness-oriented outpatient department].

Zhonghua Yi Xue Za Zhi. 2017 Apr 11;97(14):1054-1056

Authors: Li F, Wang XG, Zhuang JH, Chen Y, Zhou XW, Gao B, Gu HH

Abstract
Objective: We aimed to explore the spectrum of causes in patients attending a vertigo- and dizziness-oriented outpatient department, in order to provide a reference for the diagnosis and treatment of patients with vertigo or dizziness. Methods: A retrospective analysis was carried out on the clinical data of patients seen in our vertigo- and dizziness-oriented outpatient department. Patients were diagnosed according to uniform diagnostic criteria, and re-visiting patients were excluded. Results: This clinical study covered 5 348 cases who visited our vertigo- and dizziness-oriented outpatient clinic from December 2012 to July 2015. The male-to-female ratio was 1:1.48, and ages ranged from 16 to 93 years. The frequencies of the different etiologies were: benign paroxysmal positional vertigo 1 902 (35.56%), chronic subjective dizziness 1 329 (24.85%), vestibular migraine 624 (11.67%), Meniere's disease 378 (7.07%), multi-sensory neuropathy 231 (4.32%), vestibular paroxysmia 177 (3.31%), benign recurrent vestibulopathy 171 (3.20%), presyncope 66 (1.23%), posterior circulation ischemia 57 (1.07%), vestibular neuritis 54 (1.01%), sudden deafness with vertigo 36 (0.67%), other causes 68 (1.27%), and undetermined 255 (4.77%). Conclusions: Our study indicates that the three leading causes of vertigo or dizziness are benign paroxysmal positional vertigo, chronic subjective dizziness and vestibular migraine, followed by Meniere's disease, multi-sensory neuropathy, vestibular paroxysmia and benign recurrent vestibulopathy. Presyncope, posterior circulation ischemia, vestibular neuritis and sudden deafness with vertigo are relatively infrequent. A certain proportion of patients remain undiagnosed.
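The reported percentages follow directly from the case counts, which can be cross-checked in a few lines (labels shortened for readability):

```python
# Case counts as reported in the abstract (N = 5 348 in total).
counts = {
    "BPPV": 1902, "chronic subjective dizziness": 1329,
    "vestibular migraine": 624, "Meniere's disease": 378,
    "multi-sensory neuropathy": 231, "vestibular paroxysmia": 177,
    "benign recurrent vestibulopathy": 171, "presyncope": 66,
    "posterior circulation ischemia": 57, "vestibular neuritis": 54,
    "sudden deafness with vertigo": 36, "other": 68, "unknown": 255,
}
total = sum(counts.values())
share = {k: round(100.0 * v / total, 2) for k, v in counts.items()}
```

The counts sum to exactly 5 348, and the computed shares match the reported percentages (e.g., BPPV 35.56%, chronic subjective dizziness 24.85%).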

PMID: 28395427 [PubMed - in process]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2p7dFHv
via IFTTT

Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

Objectives: This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort. Design: The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of their receptive and expressive language ability at 13-19 years. Results: Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for those deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant in the case of receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). 
Exposure to UNHS did not account for significant unique variance in any of the three language scores at 13-19 years beyond that accounted for by existing language scores at 6-10 years. Early confirmation accounted for significant unique variance in the expressive language information score at 13-19 years after adjusting for the corresponding score at 6-10 years (R2 change = 0.08, p = 0.03). Conclusions: This study found that while adolescent language scores were higher for deaf or hard of hearing teenagers exposed to UNHS and those who had their hearing loss confirmed by 9 months, these group differences were not significant within the whole sample. There was some evidence of a beneficial effect of early confirmation of hearing loss on relative expressive language gain from childhood to adolescence. Further examination of the effect of these variables on adolescent language outcomes in other cohorts would be valuable. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2psI3uW
via IFTTT

Multisensory Integration in Cochlear Implant Recipients.

Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. 
Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2nDrKiD
via IFTTT

Listening Effort Through Depth of Processing in School-Age Children.

Objectives: A reliable and practical measure of listening effort is crucial in the aural rehabilitation of children with communication disorders. In this article, we propose a novel behavioral paradigm designed to measure listening effort in school-age children based on different depths and levels of verbal processing. The paradigm consists of a classic word recognition task performed in quiet and in noise coupled to one of three additional tasks asking the children to judge the color of simple pictures or a certain semantic category of the presented words. The response time (RT) from the categorization tasks is considered the primary indicator of listening effort. Design: The listening effort paradigm was evaluated in a group of 31 normal-hearing, normally developing children 7 to 12 years of age. A total of 146 Dutch nouns were selected for the experiment after surveying 14 local Dutch-speaking children. Windows-based custom software was developed to administer the behavioral paradigm from a conventional laptop computer. A separate touch screen was used as a response interface to gather the RT data from the participants. Verbal repetition of each presented word was scored by the tester and a percentage-correct word recognition score (WRS) was calculated for each condition. Randomized lists of target words were presented at one of three signal-to-noise ratios (SNRs) to examine the effect of background noise on the two outcome measures of WRS and RT. Three novel categorization tasks, each corresponding to a different depth or elaboration level of semantic processing, were developed to examine the effect of processing level on either WRS or RT. It was hypothesized that, while listening effort as measured by RT would be affected by both noise and processing level, WRS performance would be affected by changes in noise level only. 
The RT measure was also hypothesized to increase more from an increase in noise level in categorization conditions demanding a deeper or more elaborate form of semantic processing. Results: There was a significant effect of SNR level on school-age children's WRS: their word recognition performance tended to decrease with increasing background noise level. However, depth of processing did not seem to affect WRS. Moreover, a repeated-measures analysis of variance fitted to transformed RT data revealed that this measure of listening effort in normal-hearing school-age children was significantly affected by both SNR level and the depth of semantic processing. There was no significant interaction between noise level and the type of categorization task with regard to RT. Conclusions: The observed patterns of WRS and RT supported the hypotheses regarding the effects of background noise and depth of processing on word recognition performance and a behavioral measure of listening effort. The magnitude of noise-induced change in RT did not differ between categorization tasks, however. Our findings point to future research directions regarding the potential effects of age, working memory capacity, and cross-modality interaction when measuring listening effort at different levels of semantic processing. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.
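The paradigm's primary outcome (mean categorization RT per SNR x depth cell) can be sketched from a trial log; the tuple layout here is a hypothetical illustration of how such data might be organised, not the study's actual software:

```python
from collections import defaultdict
from statistics import mean

def mean_rt_by_condition(trials):
    """Average categorization response time (RT) per (SNR, depth) cell.

    `trials` is a list of (snr_db, depth_label, rt_ms) tuples -- a
    hypothetical layout for the paradigm's trial log.
    """
    cells = defaultdict(list)
    for snr, depth, rt in trials:
        cells[(snr, depth)].append(rt)
    return {cond: mean(rts) for cond, rts in cells.items()}
```

Feeding the per-cell means into a repeated-measures analysis then tests the SNR and depth-of-processing effects the abstract reports.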

from #Audiology via ola Kala on Inoreader http://ift.tt/2nDlAzb
via IFTTT

Language Outcomes in Deaf or Hard of Hearing Teenagers Who Are Spoken Language Users: Effects of Universal Newborn Hearing Screening and Early Confirmation.

Objectives: This study aimed to examine whether (a) exposure to universal newborn hearing screening (UNHS) and (b) early confirmation of hearing loss were associated with benefits to expressive and receptive language outcomes in the teenage years for a cohort of spoken language users. It also aimed to determine whether either of these two variables was associated with benefits to relative language gain from middle childhood to adolescence within this cohort.

Design: The participants were drawn from a prospective cohort study of a population sample of children with bilateral permanent childhood hearing loss, who varied in their exposure to UNHS and who had previously had their language skills assessed at 6-10 years. Sixty deaf or hard of hearing teenagers who were spoken language users and a comparison group of 38 teenagers with normal hearing completed standardized measures of receptive and expressive language ability at 13-19 years.

Results: Teenagers exposed to UNHS did not show significantly better expressive (adjusted mean difference, 0.40; 95% confidence interval [CI], -0.26 to 1.05; d = 0.32) or receptive (adjusted mean difference, 0.68; 95% CI, -0.56 to 1.93; d = 0.28) language skills than those who were not. Those who had their hearing loss confirmed by 9 months of age did not show significantly better expressive (adjusted mean difference, 0.43; 95% CI, -0.20 to 1.05; d = 0.35) or receptive (adjusted mean difference, 0.95; 95% CI, -0.22 to 2.11; d = 0.42) language skills than those who had it confirmed later. In all cases, effect sizes were small and in favor of those exposed to UNHS or confirmed by 9 months. Subgroup analysis indicated larger beneficial effects of early confirmation for deaf or hard of hearing teenagers without cochlear implants (N = 48; 80% of the sample), and these benefits were significant for receptive language outcomes (adjusted mean difference, 1.55; 95% CI, 0.38 to 2.71; d = 0.78). Exposure to UNHS did not account for significant unique variance in any of the three language scores at 13-19 years beyond that accounted for by existing language scores at 6-10 years. Early confirmation accounted for significant unique variance in the expressive language information score at 13-19 years after adjusting for the corresponding score at 6-10 years (R2 change = 0.08, p = 0.03).

Conclusions: Although adolescent language scores were higher for deaf or hard of hearing teenagers exposed to UNHS and for those who had their hearing loss confirmed by 9 months, these group differences were not significant within the whole sample. There was some evidence of a beneficial effect of early confirmation of hearing loss on relative expressive language gain from childhood to adolescence. Further examination of the effect of these variables on adolescent language outcomes in other cohorts would be valuable.

This is an open access article distributed under the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.
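The pattern in the results above follows directly from the reported 95% confidence intervals: a group difference is significant at the 5% level only when its CI excludes zero, which holds for the subgroup receptive-language effect but for none of the whole-sample comparisons. A minimal sketch, using only the intervals reported in the abstract:

```python
def excludes_zero(ci_low: float, ci_high: float) -> bool:
    """A 95% CI that excludes zero indicates significance at p < .05."""
    return not (ci_low <= 0.0 <= ci_high)

# Adjusted mean differences' 95% CIs from the whole-sample analyses above.
whole_sample_cis = {
    "UNHS, expressive": (-0.26, 1.05),
    "UNHS, receptive": (-0.56, 1.93),
    "early confirmation, expressive": (-0.20, 1.05),
    "early confirmation, receptive": (-0.22, 2.11),
}
# Receptive-language CI for the no-cochlear-implant subgroup.
subgroup_receptive_ci = (0.38, 2.71)

# Every whole-sample CI crosses zero; the subgroup CI does not.
non_significant = [k for k, ci in whole_sample_cis.items() if not excludes_zero(*ci)]
print(non_significant)                       # all four whole-sample comparisons
print(excludes_zero(*subgroup_receptive_ci)) # True
```

This is only a reading aid for the reported intervals, not a re-analysis of the study data.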

from #Audiology via ola Kala on Inoreader http://ift.tt/2psI3uW
via IFTTT

Multisensory Integration in Cochlear Implant Recipients.

Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception in general, and for speech intelligibility specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research.

The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributed to multisensory integration. The extent of this gain, however, varies with a number of factors, including age of implantation and the specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to those of individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains from audiovisual integration, suggesting both a sensitive period in the development of the brain networks that subserve these integrative functions and an effect of length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing.

Importantly, patterns of auditory, visual, and audiovisual responses suggest that the underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.

from #Audiology via ola Kala on Inoreader http://ift.tt/2nDrKiD
via IFTTT

Listening Effort Through Depth of Processing in School-Age Children.

Objectives: A reliable and practical measure of listening effort is crucial in the aural rehabilitation of children with communication disorders. In this article, we propose a novel behavioral paradigm designed to measure listening effort in school-age children based on different depths and levels of verbal processing. The paradigm consists of a classic word recognition task performed in quiet and in noise, coupled to one of three additional tasks asking the children to judge the color of simple pictures or a certain semantic category of the presented words. The response time (RT) from the categorization tasks is considered the primary indicator of listening effort.

Design: The listening effort paradigm was evaluated in a group of 31 normal-hearing, normally developing children 7 to 12 years of age. A total of 146 Dutch nouns were selected for the experiment after surveying 14 local Dutch-speaking children. Windows-based custom software was developed to administer the behavioral paradigm from a conventional laptop computer. A separate touch screen was used as a response interface to gather the RT data from the participants. Verbal repetition of each presented word was scored by the tester, and a percentage-correct word recognition score (WRS) was calculated for each condition. Randomized lists of target words were presented at one of three signal-to-noise ratios (SNRs) to examine the effect of background noise on the two outcome measures of WRS and RT. Three novel categorization tasks, each corresponding to a different depth or elaboration level of semantic processing, were developed to examine the effect of processing level on either WRS or RT. It was hypothesized that, while listening effort as measured by RT would be affected by both noise and processing level, WRS performance would be affected by changes in noise level only. The RT measure was also hypothesized to increase more with increasing noise level in categorization conditions demanding a deeper or more elaborate form of semantic processing.

Results: There was a significant effect of SNR level on school-age children's WRS: their word recognition performance tended to decrease with increasing background noise level. However, depth of processing did not seem to affect WRS. Moreover, a repeated-measures analysis of variance fitted to transformed RT data revealed that this measure of listening effort in normal-hearing school-age children was significantly affected by both SNR level and the depth of semantic processing. There was no significant interaction between noise level and the type of categorization task with regard to RT.

Conclusions: The observed patterns of WRS and RT supported the hypotheses regarding the effects of background noise and depth of processing on word recognition performance and a behavioral measure of listening effort. The magnitude of noise-induced change in RT did not differ between categorization tasks, however. Our findings point to future research directions regarding the potential effects of age, working memory capacity, and cross-modality interaction when measuring listening effort at different levels of semantic processing.
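The abstract's analysis fits a repeated-measures ANOVA to "transformed RT data"; RT distributions are typically right-skewed, so a log transform is a common choice (the abstract does not specify which transform was used). A minimal sketch of that preprocessing step, with entirely hypothetical RT values and condition names standing in for one child's data across the three SNR levels:

```python
import math
from statistics import mean

# Hypothetical raw RTs (ms) for one participant in one categorization task,
# one list per SNR condition. Values and labels are illustrative only.
rts_by_snr = {
    "favorable_snr": [850, 910, 780, 990],
    "medium_snr": [1020, 1100, 950, 1180],
    "adverse_snr": [1230, 1400, 1150, 1500],
}

# Log-transform each RT to reduce skew before computing per-condition means,
# as would feed into a repeated-measures ANOVA across participants.
log_mean_rt = {
    cond: mean(math.log(rt) for rt in rts) for cond, rts in rts_by_snr.items()
}

# Longer (log-)RTs in noisier conditions are read as greater listening effort.
print(sorted(log_mean_rt, key=log_mean_rt.get))
```

The ANOVA itself would be run over all 31 children's condition means (e.g., with a statistics package supporting within-subject factors); this sketch covers only the single-subject transform step.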

from #Audiology via ola Kala on Inoreader http://ift.tt/2nDlAzb
via IFTTT