Thursday, 19 May 2016

Lateralization and Binaural Interaction of Middle-Latency and Late-Brainstem Components of the Auditory Evoked Response

Abstract

We used magnetoencephalography to examine lateralization and binaural interaction of the middle-latency and late-brainstem components of the auditory evoked response (the MLR and SN10, respectively). Click stimuli were presented either monaurally, or binaurally with left- or right-leading interaural time differences (ITDs). While early MLR components, including the N19 and P30, were larger for monaural stimuli presented contralaterally (by approximately 30 and 36 % in the left and right hemispheres, respectively), later components, including the N40 and P50, were larger ipsilaterally. In contrast, MLRs elicited by binaural clicks with left- or right-leading ITDs did not differ. Depending on filter settings, weak binaural interaction could be observed as early as the P13 but was clearly much larger for later components, beginning at the P30, indicating some degree of binaural linearity up to early stages of cortical processing. The SN10, an obscure late-brainstem component, was observed consistently in individuals and showed linear binaural additivity. The results indicate that while the MLR is lateralized in response to monaural stimuli—and not ITDs—this lateralization reverses from primarily contralateral to primarily ipsilateral as early as 40 ms post stimulus and is never as large as that seen with fMRI.
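The binaural interaction the abstract discusses is conventionally quantified as the binaural interaction component (BIC): the binaural response minus the sum of the two monaural responses, which is zero everywhere under perfectly linear binaural additivity. A minimal sketch of that computation, with synthetic evoked responses standing in for the measured MEG data (all waveforms below are illustrative, not the study's):

```python
import numpy as np

def binaural_interaction(binaural, mono_left, mono_right):
    """Binaural interaction component: binaural response minus the sum
    of the two monaural responses. A BIC of zero everywhere would
    indicate perfectly linear binaural additivity."""
    return binaural - (mono_left + mono_right)

# Synthetic evoked responses (arbitrary units), sampled at 1 kHz
# over 0-100 ms post-stimulus.
t = np.arange(0, 0.1, 0.001)
left = np.sin(2 * np.pi * 30 * t) * np.exp(-t / 0.03)
right = 0.9 * left
binaural = 1.6 * left  # sub-additive: smaller than left + right

bic = binaural_interaction(binaural, left, right)
```

A nonzero BIC at a given latency, as reported here from the P30 onward, indicates that the binaural response is not simply the sum of the monaural ones.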



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1U2ZjOU
via IFTTT

Voice Self-assessment Protocols: Different Trends Among Organic and Behavioral Dysphonias

Publication date: Available online 19 May 2016
Source:Journal of Voice
Author(s): Mara Behlau, Fabiana Zambon, Felipe Moreti, Gisele Oliveira, Euro de Barros Couto
Objectives
This study aimed to correlate the results of five self-assessment instruments for patients with behavioral dysphonia or organic dysphonia (OD), and to analyze their relationship with listeners' judgments of degree of voice severity and predominant type of voice deviation.
Study Design
This is a cross-sectional prospective study.
Methods
A total of 103 patients (77 with behavioral dysphonia, 26 with OD) completed the Brazilian validated versions of five instruments: Voice Handicap Index (VHI), Voice-Related Quality of Life, Vocal Performance Questionnaire, Voice Symptom Scale (VoiSS), and Vocal Tract Discomfort Scale. Voice samples were collected for auditory-perceptual analysis. Correlations were computed among the protocols, and between these instruments and the perceptual analysis.
Results
None of the instruments correctly identified 100% of the dysphonic individuals; the VoiSS identified 100 of the 103 subjects. Numerous correlations of variable strength were found. The strongest were between the frequency and severity scales of the Vocal Tract Discomfort Scale (r = 0.946) and between the total scores of the VHI and VoiSS (r = 0.917). Correlations between the instruments and the perceptual analysis reached only moderate strength; the VHI, the Voice-Related Quality of Life, and the VoiSS showed the highest correlations with the number-counting task, particularly for OD. The predominant type of voice deviation did not influence the protocol scores.
Conclusions
None of the self-assessment instruments is capable of identifying all cases of dysphonia. However, they are important in assessing the impact of a voice problem on quality of life. Patient self-assessment and clinician perceptual evaluation share only moderate correlations, stronger for the number-counting task than for the sustained vowel.
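The protocol-to-protocol correlations reported above (e.g. r = 0.917 between the VHI and VoiSS totals) are Pearson coefficients between paired questionnaire scores. A sketch of that computation with hypothetical total scores (the numbers below are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical total scores for the same eight patients on two
# protocols (e.g. VHI and VoiSS); the real analysis used 103 patients.
vhi = np.array([34, 58, 12, 77, 45, 23, 61, 50])
voiss = np.array([40, 66, 18, 85, 49, 30, 70, 55])

# Pearson correlation between the two instruments' totals.
r, p = pearsonr(vhi, voiss)
```

The same call, applied pairwise across all five instruments (and between each instrument and the perceptual ratings), yields the correlation matrix the abstract summarizes.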



from #Audiology via xlomafota13 on Inoreader http://ift.tt/25cLVTQ
via IFTTT

Validation and Evaluation of the Effects of Semi-Occluded Face Mask Straw Phonation Therapy Methods on Aerodynamic Parameters in Comparison to Traditional Methods

Publication date: Available online 19 May 2016
Source:Journal of Voice
Author(s): Randal Mills, Cameron Hays, Jehad Al-Ramahi, Jack J. Jiang
Objectives/Hypothesis
Traditional semi-occluded vocal tract therapies have the benefit of improving vocal economy but do not allow for connected speech during rehabilitation. In this study, we introduce a semi-occluded face mask (SOFM) as an improvement upon current methods. This novel technique allows for normal speech production and should make the transition to everyday speech more natural. We hypothesize that use of an SOFM will lead to the same gains in vocal economy seen with traditional methods.
Study Design
Repeated-measures excised canine larynx bench experiment, with each larynx subjected to controls and a randomized series of experimental conditions.
Methods
Aerodynamic data were collected for 30 excised canine larynges. The larynges were subjected to conditions including a control, two tube extensions (15 and 30 cm), and two tube diameters (6.5 and 17 mm), both with and without the SOFM. Results were compared between groups and between conditions within each group.
Results
No significant differences were found between the phonation threshold pressure and phonation threshold flow measurements obtained with or without the SOFM across all extension and constriction levels. Significant differences in phonation threshold pressure and phonation threshold flow were observed when varying the tube diameter, while the same comparison for varying the tube length at least trended toward significance.
Conclusions
This study suggests that an SOFM can be used to elicit the same gains in vocal economy as those seen with traditional semi-occluded vocal tract methods. Future studies should test this novel technique in human subjects to validate its use in a clinical setting.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/20cER2J
via IFTTT

Tinnitus Relief From Japan 2014

Tinnitus, or ringing in the ears, comes in varying degrees. Some sufferers experience the nagging annoyance of a ringing that comes and goes, while others have symptoms that are both distracting and painful and can even lead to hearing loss.

But anyone dealing with this condition can take comfort in knowing there is encouraging news regarding tinnitus relief from Japan 2014.

A Few Facts About Tinnitus

The medical community does not consider tinnitus a serious or life-threatening condition, which may explain why there is little significant research on curing it. Research shows that one in five people have experienced tinnitus, with almost 90 percent of those subjects showing evidence of hearing loss. While studies like the tinnitus relief from Japan 2014 work have been noted for providing better management of the condition, procedures in the U.S. consist mostly of suppressing the ringing rather than stopping it.

Reevaluating Treating Tinnitus

Thanks to a study that supports tinnitus relief from Japan 2014, the medical community is beginning to see that a more individualized approach to treating tinnitus is needed. Studies out of the East have demonstrated more integrated approaches and have shown results that silence the ringing. The treatment consisted of stress reduction, sound therapy, relaxation techniques and sleep management, with noticeable success.

The U.S. Joins Tinnitus Relief from Japan 2014

Building on research coming out of the East, scientists and doctors in the U.S. and Japan have collaborated on developing treatments that could effectively end tinnitus instead of masking it. The new research looks at how the death of cells in the inner ear can actually increase the severity of tinnitus. This revelation led to investigating the possible effects of regenerating cells in the ear to help normalize hearing. This form of gene therapy may lead to a cure for tinnitus sufferers.

There has also been significant research coming out of the Oregon Health & Science University and the Veterans Affairs Portland Medical Center that advocates transcranial magnetic stimulation. This is a safe, non-invasive procedure that alters the activity of neurons in the brain, with no recorded side effects. Clinical trials have shown positive results in 18 out of 32 participants.

At the end of the day, whether promising treatments come out of tinnitus relief from Japan 2014 or from new research conducted in the U.S., it may not be long before sufferers have the relief they have craved for so long.




from #Audiology via xlomafota13 on Inoreader http://ift.tt/20cCY6f
via IFTTT

Loss of glycine receptors containing the α3 subunit compromises auditory nerve activity, but not outer hair cell function


Publication date: Available online 18 May 2016
Source:Hearing Research
Author(s): Julia Dlugaiczyk, Dietmar Hecker, Christian Neubert, Stefanie Buerbank, Dario Campanelli, Cord-Michael Becker, Heinrich Betz, Marlies Knipper, Lukas Rüttiger, Bernhard Schick
Inhibitory glycine receptors containing the α3 subunit (GlyRα3) regulate sensory information processing in the CNS and retina. In previous work, we demonstrated the presence of postsynaptic GlyRα3 immunoreactivity at efferent synapses of the medial and lateral olivocochlear bundle in the organ of Corti; however, the role of these α3-GlyRs in auditory signalling has remained elusive. The present study analyzes distortion-product otoacoustic emissions (DPOAEs) and auditory brainstem responses (ABRs) of knockout mice with a targeted inactivation of the Glra3 gene (Glra3-/-) and their wildtype littermates (Glra3+/+) before and seven days after acoustic trauma (AT; 4 to 16 kHz, 120 dB SPL, 1 h). Before AT, DPOAE thresholds were slightly but significantly lower, and DPOAE amplitudes slightly larger, in Glra3-/- than in Glra3+/+ mice. While click- and f-ABR thresholds were similar in both genotypes before AT, threshold-normalized click-ABR wave I amplitudes were smaller in Glra3-/- mice than in their wildtype littermates. Following AT, both the decrement of ABR wave I amplitudes and the delay of wave I latencies were more pronounced in Glra3-/- than in Glra3+/+ mice. Accordingly, the correlation between early click-evoked ABR signals (0 to 2.5 ms from stimulus onset) before and after AT was significantly reduced in Glra3-/- compared with Glra3+/+ mice. In summary, these results show that loss of α3-GlyRs compromises suprathreshold auditory nerve activity, but not outer hair cell function.



from #Audiology via ola Kala on Inoreader http://ift.tt/1W4OcLR
via IFTTT

Perceptually aligning apical frequency regions leads to more binaural fusion of speech in a CI simulation


Publication date: Available online 18 May 2016
Source:Hearing Research
Author(s): Hannah E. Staisloff, Daniel H. Lee, Justin M. Aronoff
For bilateral cochlear implant users, the left and right arrays are typically not physically aligned, resulting in a degradation of binaural fusion, which can be detrimental to binaural abilities. Perceptually aligning the two arrays can be accomplished by disabling electrodes in one ear that do not have a perceptually corresponding electrode on the other side. However, disabling electrodes at the edges of the array will compress the input frequency range into a smaller cochlear extent, which may reduce spectral resolution. An alternative approach to overcome this mismatch would be to align only one edge of the array. By aligning only the apical or only the basal end of the arrays, fewer electrodes would be disabled, potentially causing less reduction in spectral resolution. The goal of this study was to determine the relative effect of aligning either the basal or the apical end of the arrays with regard to binaural fusion. A vocoder was used to simulate cochlear implant listening conditions in normal hearing listeners. Speech signals were vocoded such that the two ears were predominantly aligned at only the basal or only the apical end of the simulated arrays. The experiment was then repeated with a spectrally inverted vocoder to determine whether the detrimental effects on fusion were related to the spectral-temporal characteristics of the stimuli or to the location in the cochlea where the misalignment occurred. In Experiment 1, aligning the basal portion of the simulated arrays led to significantly less binaural fusion than aligning the apical portions. However, when the input was spectrally inverted, aligning the apical portion of the simulated arrays led to significantly less binaural fusion than aligning the basal portions.
These results suggest that, for speech, with its predominantly low frequency spectral-temporal modulations, it is more important to perceptually align the apical portion of the array to better preserve binaural fusion. By partially aligning these arrays, cochlear implant users could potentially increase their ability to fuse speech sounds presented to the two ears while maximizing spectral resolution.
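The study's specific vocoder (band allocation, spectral inversion, interaural alignment) is not detailed here, but the kind of noise-excited channel vocoder commonly used to simulate cochlear implant listening can be sketched as follows; the band edges, filter order, and test signal below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, edges):
    """Noise-excited channel vocoder: band-pass the input into channels,
    extract each channel's temporal envelope, and use it to modulate
    band-limited noise, discarding fine spectral structure as a CI does."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(x))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)                  # analysis band
        env = np.abs(hilbert(band))             # temporal envelope
        out += sosfilt(sos, env * noise)        # envelope-modulated noise
    return out

fs = 16000
t = np.arange(fs) / fs
# Speech-like test tone with a slow (4 Hz) amplitude modulation.
speechlike = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
voc = noise_vocode(speechlike, fs, [100, 400, 1000, 2500, 6000])
```

Shifting or inverting the mapping between analysis bands and synthesis bands across the two ears is how simulations of this kind introduce the interaural misalignment the study manipulates.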



from #Audiology via ola Kala on Inoreader http://ift.tt/20bQsit
via IFTTT


Validating self-reporting of hearing-related symptoms against pure-tone audiometry, otoacoustic emission, and speech audiometry

DOI: 10.1080/14992027.2016.1177210
Sofie Fredriksson

from #Audiology via ola Kala on Inoreader http://ift.tt/1qwLuRp
via IFTTT


Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods

The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first the abundance and then the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and by employing a nonlinear inversion involving a scattering model-based kernel.
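As a rough illustration of the linear inversion step, the sketch below solves a synthetic multi-frequency problem S = K n with a truncated-SVD pseudo-inverse, the standard way to handle a near-singular kernel. The kernel values, abundances, and noise level are invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical linear forward model: measured backscatter at M frequencies
# is S = K @ n, where K[i, j] is the (assumed known) backscattering
# cross-section of size class j at frequency i, and n holds abundances.
rng = np.random.default_rng(1)
K = rng.uniform(0.1, 1.0, size=(6, 3))   # synthetic kernel: 6 freqs x 3 classes
n_true = np.array([50.0, 20.0, 5.0])     # synthetic abundances
S = K @ n_true + rng.normal(0.0, 1e-3, 6)  # noisy multi-frequency data

# Truncated-SVD pseudo-inverse: discard small singular values that make
# the inversion unstable (the "singularity of the kernel" in the text).
U, s, Vt = np.linalg.svd(K, full_matrices=False)
keep = s > 1e-6 * s[0]
n_est = Vt[keep].T @ ((U[:, keep].T @ S) / s[keep])
```

The nonlinear, scattering model-based inversion the abstract proposes would replace the fixed `K` with a kernel that itself depends on the unknown shape and tilt parameters, which is why it must be solved iteratively rather than with a single pseudo-inverse.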



from #Audiology via xlomafota13 on Inoreader http://ift.tt/27DK21q
via IFTTT

ACOUSTICAL STANDARDS NEWS

American National Standards (ANSI Standards) developed by Accredited Standards Committees S1, S2, S3, S3/SC 1, and S12 in the areas of acoustics, mechanical vibration and shock, bioacoustics, animal bioacoustics, and noise, respectively, are published by the Acoustical Society of America (ASA).

Comments are welcomed on all material in Acoustical Standards News.

This Acoustical Standards News section in JASA, as well as the national catalog of Acoustical Standards, and other information on the Standards Program of the Acoustical Society of America, are available via the ASA home page: http://ift.tt/1rNqYG4.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1TpJC7a
via IFTTT

Effects of speech style, room acoustics, and vocal fatigue on vocal effort

Vocal effort is a physiological measure that accounts for changes in voice production as vocal loading increases. It has been quantified in terms of sound pressure level (SPL). This study investigates how vocal effort is affected by speaking style, room acoustics, and short-term vocal fatigue. Twenty subjects were recorded while reading a text at normal and loud volumes in anechoic, semi-reverberant, and reverberant rooms in the presence of classroom babble noise. The acoustics in each environment were modified by creating a strong first reflection at the talker position. After each task, the subjects answered questions addressing their perception of vocal effort, comfort, control, and clarity of their own voice. Variation in SPL for each subject was measured per task. It was found that SPL and self-reported effort increased in the loud style and decreased when the reflective panels were present and when reverberation time increased. Self-reported comfort and control decreased in the loud style, while self-reported clarity increased when panels were present. The lowest magnitude of vocal fatigue was experienced in the semi-reverberant room. The results indicate that early reflections may be used to reduce vocal effort without modifying reverberation time.
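The SPL measure used above is defined as 20 log10(p_rms / p_ref) with the standard reference pressure p_ref = 20 µPa. A minimal sketch of the computation; the test signal and sampling rate are hypothetical:

```python
import numpy as np

P_REF = 20e-6  # reference pressure in Pa (20 micropascals)

def spl_db(pressure):
    """Sound pressure level in dB re 20 uPa from a pressure waveform in Pa."""
    p_rms = np.sqrt(np.mean(np.square(pressure)))
    return 20.0 * np.log10(p_rms / P_REF)

# A 1 Pa RMS sine corresponds to about 94 dB SPL, the common calibrator level.
fs = 8000
t = np.arange(fs) / fs
tone = np.sqrt(2.0) * np.sin(2 * np.pi * 1000 * t)  # amplitude sqrt(2) -> 1 Pa RMS
level = spl_db(tone)
```

Per-task SPL variation, as measured in the study, would be obtained by applying such a level computation to each recorded reading task.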



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1TpJbtM
via IFTTT

Sperm whale codas may encode individuality as well as clan identity

Sperm whales produce codas for communication that can be grouped into different types according to their temporal patterns. Codas have led researchers to propose that sperm whales belong to distinct cultural clans, but it is presently unclear if they also convey individual information. Coda clicks comprise a series of pulses, and the delay between pulses is a function of organ size, and therefore body size, so it is one potential source of individual information. Another potential individual-specific parameter could be the inter-click intervals within codas. To test whether these parameters provide reliable individual cues, stereo-hydrophone acoustic tags (Dtags) were attached to five sperm whales of the Azores, recording a total of 802 codas. A discriminant function analysis was used to distinguish 288 "5 Regular" codas from four of the sperm whales and 183 "3 Regular" codas from two sperm whales. The results suggest that codas have consistent individual features in their inter-click intervals and inter-pulse intervals, which may contribute to individual identification. Additionally, two whales produced different coda types in distinct foraging dive phases. Codas may therefore be used by sperm whales to convey information about identity as well as activity within a social group, to a larger extent than previously assumed.
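The idea behind the discriminant analysis can be illustrated with a toy nearest-centroid classifier on synthetic inter-click-interval (ICI) vectors. The whale mean patterns and noise level are invented stand-ins, not the study's data; only the sample size (4 whales x 72 codas = 288 "5 Regular" codas) mirrors the abstract.

```python
import numpy as np

# Synthetic stand-in for the "5 Regular" codas: each coda is a vector of
# four inter-click intervals (seconds); each of four whales gets a slightly
# different mean ICI pattern, mimicking an individual signature.
rng = np.random.default_rng(2)
whale_means = rng.uniform(0.1, 0.4, size=(4, 4))   # 4 whales x 4 ICIs
X = np.vstack([m + rng.normal(0.0, 0.01, size=(72, 4)) for m in whale_means])
y = np.repeat(np.arange(4), 72)                    # whale identity labels

# Nearest-centroid classification (a crude stand-in for the study's
# discriminant function analysis): assign each coda to the whale whose
# mean ICI vector is closest in Euclidean distance.
centroids = np.stack([X[y == w].mean(axis=0) for w in range(4)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = float(np.mean(pred == y))
```

If the per-whale ICI signatures are consistent relative to their within-whale variability, classification accuracy is high, which is the sense in which the abstract says ICIs "may contribute to individual identification."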



from #Audiology via xlomafota13 on Inoreader http://ift.tt/27DKiNV
via IFTTT

Influence of lips on the production of vowels based on finite element simulations and experiments

Three-dimensional (3-D) numerical approaches for voice production are currently being investigated and developed. Radiation losses produced when sound waves emanate from the mouth aperture are one of the key aspects to be modeled. When doing so, the lips are usually removed from the vocal tract geometry in order to impose a radiation impedance on a closed cross-section, which speeds up the numerical simulations compared to free-field radiation solutions. However, lips may play a significant role. In this work, the lips' effects on vowel sounds are investigated by using 3-D vocal tract geometries generated from magnetic resonance imaging. To this aim, two configurations for the vocal tract exit are considered: with lips and without lips. The acoustic behavior of each is analyzed and compared by means of time-domain finite element simulations that allow free-field wave propagation and experiments performed using 3-D-printed mechanical replicas. The results show that the lips should be included in order to correctly model vocal tract acoustics not only at high frequencies, as commonly accepted, but also in the low frequency range below 4 kHz, where plane wave propagation occurs.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1TpJhBy
via IFTTT

Cover Image, Volume 170A, Number 6, June 2016.

Am J Med Genet A. 2016 Jun;170(6):i

Authors: Bayat A, Fijalkowski I, Andersen T, Abdulmunem SA, van den Ende J, Van Hul W

Abstract
The cover image, by Wim Van Hul et al., is based on the Original Article Further delineation of facioaudiosymphalangism syndrome: Description of a family with a novel NOG mutation and without hearing loss, DOI: 10.1002/ajmg.a.37626.

PMID: 27191530 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1U1rhuk
via IFTTT

The Expanding Role of Audiology Telepractice



from #Audiology via ola Kala on Inoreader http://ift.tt/1VaG777
via IFTTT

A Phoneme Perception Test Method for High-Frequency Hearing Aid Fitting



from #Audiology via ola Kala on Inoreader http://ift.tt/209uAV1
via IFTTT

The Effect of the Arabic Computer Rehabilitation Program “Rannan” on Sound Detection and Discrimination in Children with Cochlear Implants



from #Audiology via ola Kala on Inoreader http://ift.tt/1VaG0IW
via IFTTT

Recognition of Speech from the Television with Use of a Wireless Technology Designed for Cochlear Implants



from #Audiology via ola Kala on Inoreader http://ift.tt/209uFIh
via IFTTT

Assessment of Functional Hearing in Greek-Speaking Children Diagnosed with Central Auditory Processing Disorder



from #Audiology via ola Kala on Inoreader http://ift.tt/1VaGaA0
via IFTTT

Cortical Auditory-Evoked Potentials in Response to Multitone Stimuli in Hearing-Impaired Adults



from #Audiology via xlomafota13 on Inoreader http://ift.tt/209uzjV
via IFTTT

Validation of the Home Hearing Test™



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1VaFZVo
via IFTTT
