Tuesday, January 16, 2018

New Experimental Device Promises Tinnitus Relief

Millions of Americans suffer from tinnitus, defined by researchers at the Kresge Hearing Research Institute in the University of Michigan's Department of Otolaryngology as the phantom perception of sound in the absence of external stimuli. About 2 million people are incapacitated by its effects. The severity of the discomfort varies: some individuals are minimally disturbed, while others suffer sleep disturbances, poor concentration, depression, and anxiety. Fortunately, a recently published study, "Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans," reports a promising technology that may help sufferers reduce their tinnitus.

Here's the technical explanation from the study: tinnitus is believed to result from impaired physiological regulation of neural synchrony from the dorsal cochlear nucleus (DCN) to the neural ensembles along the auditory pathway. The DCN is where the initial multisensory integration of neural inputs from the auditory nerve, auditory midbrain, auditory cortex, trigeminal and cervical ganglia, spinal trigeminal nucleus, and dorsal column nuclei takes place. Animal research shows that increased spontaneous, synchronized activity of the DCN's output neurons, the fusiform cells, produces behavioral evidence of tinnitus.

University of Michigan Medical School professor Susan Shore, PhD, the study's lead researcher, offers a less technical explanation. In an article, Shore identified a specific region of the brainstem, the DCN, as the root of tinnitus. "When the main neurons in this region, called fusiform cells, become hyperactive and synchronize with one another, the phantom signal is transmitted into other centers where perception occurs," she explained.

"If we can stop these signals, we can stop tinnitus," said Shore. "That is what our approach attempts to do, and we're encouraged by these initial parallel results in animals and humans."

The research, which examined fusiform cells and their role in tinnitus perception, used a dual-stimulus approach called targeted bimodal auditory-somatosensory stimulation to induce long-term depression (LTD) in the cochlear nucleus and beneficially reset the activity of the fusiform cells. The experimental treatment was delivered to guinea pigs for 25 days, and the same bimodal treatment was administered to 20 human subjects for 28 days. The results are encouraging: the study concludes that bimodal auditory-somatosensory stimulation may suppress chronic tinnitus in patients.
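To make the idea of "bimodal" pairing concrete, here is a minimal sketch that builds a toy schedule pairing an auditory tone burst with a somatosensory pulse at a fixed inter-stimulus interval. Every parameter below (pulse timing, interval, session length) is a hypothetical placeholder and does not reflect the actual device or the study's protocol.

```python
# Toy bimodal stimulation schedule: each auditory event is paired with a
# somatosensory pulse after a fixed inter-stimulus interval.
# All parameter values are illustrative assumptions, not the study's settings.
from dataclasses import dataclass
from typing import List

@dataclass
class BimodalEvent:
    t_auditory_ms: float        # onset time of the auditory tone burst
    t_somatosensory_ms: float   # onset time of the paired somatosensory pulse

def build_session(n_pairs: int = 100,
                  pair_period_ms: float = 1000.0,
                  inter_stimulus_interval_ms: float = 5.0) -> List[BimodalEvent]:
    """Return paired stimulus onset times for one hypothetical session."""
    return [
        BimodalEvent(i * pair_period_ms,
                     i * pair_period_ms + inter_stimulus_interval_ms)
        for i in range(n_pairs)
    ]

if __name__ == "__main__":
    session = build_session()
    first = session[0]
    print(f"{len(session)} pairs; first auditory onset at {first.t_auditory_ms} ms, "
          f"paired somatosensory pulse at {first.t_somatosensory_ms} ms")
```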

While the tinnitus treatment described in this research is promising, it remains experimental and is not yet commercially available. The human test subjects were limited to sufferers who could temporarily alter their symptoms by clenching the jaw, sticking out the tongue, or turning or flexing the neck. Another clinical trial is planned for this year.

Published: 1/16/2018 3:16:00 PM


from #Audiology via xlomafota13 on Inoreader http://ift.tt/2D9lNxB
via IFTTT

Labyrinthine Sequestrum: A Case Report and Review of the Literature

Objective: To report the presentation, diagnosis, management, and convalescence of labyrinthine sequestrum (LS) and summarize all previously published cases. Patient(s): Eleven-year-old female with LS. Intervention(s): Multidisciplinary diagnostic evaluation and treatment. Main Outcome Measures: Imaging and laboratory findings, medical and surgical treatment. Results: We describe a case of LS secondary to medically recalcitrant suppurative otitis media in an 11-year-old female and review all eight previously reported cases. The index patient presented after 6 months of otitis media, profound unilateral hearing loss, with symptoms suggesting meningitis. Temporal bone CT demonstrated marked bony destruction of the left otic capsule. Gadolinium-enhanced MRI showed an enhancing process with evidence of meningitis and subdural empyema. The patient was treated with surgical debridement and culture directed antibiotic therapy. Posttreatment imaging showed resolution of intracranial infection with fibrous bony healing of the otic capsule resembling fibrous dysplasia. Conclusion: LS is a rare form of labyrinthitis characterized by centrifugal destruction of the otic capsule. The current index case highlights the importance of combined medical and surgical treatment and describes for the first time in the literature the fibrous ossification of the otic capsule following disease resolution. Address correspondence and reprint requests to Julie B. Guerin, M.D., Department of Diagnostic Radiology, Mayo Clinic, 200 First Street SW, Rochester, MN 55905; E-mail: guerin.julie@mayo.edu Financial support: This work was not funded by any agency or grant. The authors disclose no conflicts of interest. Copyright © 2018 by Otology & Neurotology, Inc. Image copyright © 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FHYY5T
via IFTTT

Using Thresholds in Noise to Identify Hidden Hearing Loss in Humans

Objectives: Recent animal studies suggest that noise-induced synaptopathy may underlie a phenomenon that has been labeled hidden hearing loss (HHL). Noise exposure preferentially damages low spontaneous-rate auditory nerve fibers, which are involved in the processing of moderate- to high-level sounds and are more resistant to masking by background noise. Therefore, the effect of synaptopathy may be more evident in suprathreshold measures of auditory function, especially in the presence of background noise. The purpose of this study was to develop a statistical model for estimating HHL in humans using thresholds in noise as the outcome variable and measures that reflect the integrity of sites along the auditory pathway as explanatory variables. Our working hypothesis is that HHL is evident in the portion of the variance observed in thresholds in noise that is not dependent on thresholds in quiet, because this residual variance retains statistical dependence on other measures of suprathreshold function. Design: Study participants included 13 adults with normal hearing (≤15 dB HL) and 20 adults with normal hearing at 1 kHz and sensorineural hearing loss at 4 kHz (>15 dB HL). Thresholds in noise were measured, and the residual of the correlation between thresholds in noise and thresholds in quiet, which we refer to as thresholds-in-noise residual, was used as the outcome measure for the model. Explanatory measures were as follows: (1) auditory brainstem response (ABR) waves I and V amplitudes; (2) electrocochleographic action potential and summating potential amplitudes; (3) distortion product otoacoustic emissions level; and (4) categorical loudness scaling. All measurements were made at two frequencies (1 and 4 kHz). ABR and electrocochleographic measurements were made at 80 and 100 dB peak equivalent sound pressure level, while wider ranges of levels were tested during distortion product otoacoustic emission and categorical loudness scaling measurements. A model relating the thresholds-in-noise residual and the explanatory measures was created using multiple linear regression analysis. Results: Predictions of thresholds-in-noise residual using the model accounted for 61% (p
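As a rough illustration of the modeling strategy described above (not the authors' code or data), the sketch below first regresses thresholds in noise on thresholds in quiet to obtain the thresholds-in-noise residual, then fits a multiple linear regression of that residual on synthetic stand-ins for explanatory measures such as ABR wave I amplitude and DPOAE level.

```python
# Sketch of the residual-based regression approach described in the abstract.
# All data are synthetic and variable names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 33  # 13 + 20 participants, matching the study's sample size

thresh_quiet = rng.normal(10, 8, n)      # thresholds in quiet (dB HL, synthetic)
abr_wave_i = rng.normal(0.4, 0.1, n)     # ABR wave I amplitude (uV, synthetic)
dpoae_level = rng.normal(5, 4, n)        # DPOAE level (dB SPL, synthetic)
thresh_noise = (0.8 * thresh_quiet - 20 * abr_wave_i
                - 0.3 * dpoae_level + rng.normal(0, 2, n))

def ols(X, y):
    """Ordinary least squares: return coefficients and residuals."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta, y - X1 @ beta

# Step 1: the thresholds-in-noise residual is what remains after removing the
# part of thresholds in noise that is predictable from thresholds in quiet.
_, tin_residual = ols(thresh_quiet[:, None], thresh_noise)

# Step 2: multiple linear regression of that residual on suprathreshold measures.
beta, resid = ols(np.column_stack([abr_wave_i, dpoae_level]), tin_residual)
r_squared = 1 - resid.var() / tin_residual.var()
print(f"coefficients: {np.round(beta, 3)}, R^2 = {r_squared:.2f}")
```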

from #Audiology via ola Kala on Inoreader http://ift.tt/2DhdyUl
via IFTTT

Children’s Recognition of Emotional Prosody in Spectrally Degraded Speech Is Predicted by Their Age and Cognitive Status

Objectives: It is known that school-aged children with cochlear implants show deficits in voice emotion recognition relative to normal-hearing peers. Little, however, is known about normal-hearing children’s processing of emotional cues in cochlear implant–simulated, spectrally degraded speech. The objective of this study was to investigate school-aged, normal-hearing children’s recognition of voice emotion, and the degree to which their performance could be predicted by their age, vocabulary, and cognitive factors such as nonverbal intelligence and executive function. Design: Normal-hearing children (6–19 years old) and young adults were tested on a voice emotion recognition task under three different conditions of spectral degradation using cochlear implant simulations (full-spectrum, 16-channel, and 8-channel noise-vocoded speech). Measures of vocabulary, nonverbal intelligence, and executive function were obtained as well. Results: Adults outperformed children on all tasks, and a strong developmental effect was observed. The children’s age, the degree of spectral resolution, and nonverbal intelligence were predictors of performance, but vocabulary and executive functions were not, and no interactions were observed between age and spectral resolution. Conclusions: These results indicate that cognitive function and age play important roles in children’s ability to process emotional prosody in spectrally degraded speech. The lack of an interaction between the degree of spectral resolution and children’s age further suggests that younger and older children may benefit similarly from improvements in spectral resolution. The findings imply that younger and older children with cochlear implants may benefit similarly from technical advances that improve spectral resolution. ACKNOWLEDGMENTS: The authors thank the child participants and their families for their support of our work. Brooke Burianek, Devan Ridenoure, Shauntelle Cannon, and Sara Damm helped with data collection and data entry. Sophie Ambrose and Ryan McCreery provided valuable input on the cognitive and executive function tests. The authors thank the Emily Shannon Fu Foundation for the use of the experimental software used in this study. Anna R. Tinnemore is currently at the Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA. This research was supported by National Institutes of Health (NIH) grants R01 DC014233 and R21 DC011905 (to M.C.) and the Human Subjects Recruitment Core of NIH P30 DC004662. A.R.T. and D.J.Z. were supported by National Institutes of Health (NIH) T35 DC008757. Address for correspondence: Monita Chatterjee, Boys Town National Research Hospital, 555 North 30th Street, Omaha, NE 68131, USA. E-mail: monita.chatterjee@boystown.org Received September 4, 2017; accepted November 24, 2017. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
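For readers unfamiliar with the cochlear-implant simulations mentioned above, the sketch below implements a generic N-channel noise vocoder (log-spaced bands, rectified and low-pass-filtered envelopes modulating band-limited noise). It is a minimal illustration under those assumptions, not the experimental software used in the study.

```python
# Minimal N-channel noise vocoder of the kind used to simulate CI processing.
# Design choices (log-spaced bands, 4th-order filters, 300 Hz envelope cutoff)
# are assumptions for illustration, not the study's parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cutoff=300.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)     # logarithmic band edges
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)              # analysis band
        env = np.maximum(sosfiltfilt(env_sos, np.abs(band)), 0.0)  # temporal envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += env * carrier                              # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)            # normalize

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    test_signal = np.sin(2 * np.pi * 440 * t)             # stand-in for a speech token
    vocoded = noise_vocode(test_signal, fs, n_channels=8)
    print(vocoded.shape)
```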

from #Audiology via ola Kala on Inoreader http://ift.tt/2B5zU5e
via IFTTT

Tinnitus and Auditory Perception After a History of Noise Exposure: Relationship to Auditory Brainstem Response Measures

Objectives: To determine whether auditory brainstem response (ABR) wave I amplitude is associated with measures of auditory perception in young people with normal distortion product otoacoustic emissions (DPOAEs) and varying levels of noise exposure history. Design: Tinnitus, loudness tolerance, and speech perception ability were measured in 31 young military Veterans and 43 non-Veterans (19 to 35 years of age) with normal pure-tone thresholds and DPOAEs. Speech perception was evaluated in quiet using Northwestern University Auditory Test (NU-6) word lists and in background noise using the words in noise (WIN) test. Loudness discomfort levels were measured using 1-, 3-, 4-, and 6-kHz pulsed pure tones. DPOAEs and ABRs were collected in each participant to assess outer hair cell and auditory nerve function. Results: The probability of reporting tinnitus in this sample increased by a factor of 2.0 per 0.1 µV decrease in ABR wave I amplitude (95% Bayesian confidence interval, 1.1 to 5.0) for males and by a factor of 2.2 (95% confidence interval, 1.0 to 6.4) for females after adjusting for sex and DPOAE levels. Similar results were obtained in an alternate model adjusted for pure-tone thresholds in addition to sex and DPOAE levels. No apparent relationship was found between wave I amplitude and either loudness tolerance or speech perception in quiet or noise. Conclusions: Reduced ABR wave I amplitude was associated with an increased risk of tinnitus, even after adjusting for DPOAEs and sex. In contrast, wave III and V amplitudes had little effect on tinnitus risk. This suggests that changes in peripheral input at the level of the inner hair cell or auditory nerve may lead to increases in central gain that give rise to the perception of tinnitus. Although the extent of synaptopathy in the study participants cannot be measured directly, these findings are consistent with the prediction that tinnitus may be a perceptual consequence of cochlear synaptopathy. ACKNOWLEDGMENTS: The authors thank Drs. Brad Buran and Charlie Liberman for their helpful comments on the study and article. This research was supported by the Department of Veterans Affairs, Veterans Health Administration, Rehabilitation Research and Development Service: Award No. C1484-M (to N.F.B.) and C9230-C [to National Center for Rehabilitative Auditory Research (NCRAR)]. N.F.B. designed and performed the experiments, analyzed the data, and wrote the article. D.K.-M. aided in the design of the experiments and provided critical revision. G.P.M. provided statistical analysis and critical revision. The opinions and assertions presented are private views of the authors and are not to be construed as official or as necessarily reflecting the views of the Veterans Administration (VA) or the Department of Defense. The authors have no conflicts of interest to disclose. Address for correspondence: Naomi F. Bramhall, VA RR&D National Center for Rehabilitative Auditory Research (NCRAR), 3710 SW US Veterans Hospital Road, P5-NCRAR, Portland, OR 97239, USA. E-mail: naomi.bramhall@va.gov Received April 28, 2017; accepted November 16, 2017. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
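To make the reported effect size concrete, the snippet below fits an ordinary logistic regression on synthetic data and converts the slope into an odds ratio per 0.1 µV decrease in wave I amplitude, the unit used in the abstract. This is only a sketch; the study's model was Bayesian and adjusted for sex and DPOAE levels.

```python
# Synthetic illustration of expressing a logistic-regression slope as an odds
# ratio per 0.1 uV *decrease* in ABR wave I amplitude. Not the study's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 74                                         # 31 Veterans + 43 non-Veterans
wave_i_uv = rng.normal(0.45, 0.12, n)          # synthetic wave I amplitudes (uV)
true_slope_per_uv = -np.log(2.0) / 0.1         # corresponds to OR = 2.0 per 0.1 uV decrease
p_tinnitus = 1.0 / (1.0 + np.exp(-(2.5 + true_slope_per_uv * wave_i_uv)))
tinnitus = rng.binomial(1, p_tinnitus)         # synthetic tinnitus status (0/1)

fit = sm.Logit(tinnitus, sm.add_constant(wave_i_uv)).fit(disp=0)
slope_per_uv = fit.params[1]
or_per_0p1_uv_decrease = np.exp(-0.1 * slope_per_uv)
print(f"estimated odds ratio per 0.1 uV decrease: {or_per_0p1_uv_decrease:.2f}")
```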

from #Audiology via ola Kala on Inoreader http://ift.tt/2DfmdXk
via IFTTT

The Effect of Simulated Interaural Frequency Mismatch on Speech Understanding and Spatial Release From Masking

Objective: The binaural-hearing system interaurally compares inputs, which underlies the ability to localize sound sources and to better understand speech in complex acoustic environments. Cochlear implants (CIs) are provided in both ears to increase binaural-hearing benefits; however, bilateral CI users continue to struggle with understanding speech in the presence of interfering sounds and do not achieve the same level of spatial release from masking (SRM) as normal-hearing listeners. One reason for diminished SRM in CI users could be that the electrode arrays are inserted at different depths in each ear, which would cause an interaural frequency mismatch. Because interaural frequency mismatch diminishes the salience of interaural differences for relatively simple stimuli, it may also diminish binaural benefits for spectral-temporally complex stimuli like speech. This study evaluated the effect of simulated frequency-to-place mismatch on speech understanding and SRM. Design: Eleven normal-hearing listeners were tested on a speech understanding task. There was a female target talker who spoke five-word sentences from a closed set of words. There were two interfering male talkers who spoke unrelated sentences. Nonindividualized head-related transfer functions were used to simulate a virtual auditory space. The target was presented from the front (0°), and the interfering speech was either presented from the front (colocated) or from 90° to the right (spatially separated). Stimuli were then processed by an eight-channel vocoder with tonal carriers to simulate aspects of listening through a CI. Frequency-to-place mismatch (“shift”) was introduced by increasing the center frequency of the synthesis filters compared with the corresponding analysis filters. Speech understanding was measured for different shifts (0, 3, 4.5, and 6 mm) and target-to-masker ratios (TMRs: +10 to −10 dB). SRM was calculated as the difference in the percentage of correct words for the colocated and separated conditions. Two types of shifts were tested: (1) bilateral shifts that had the same frequency-to-place mismatch in both ears, but no interaural frequency mismatch, and (2) unilateral shifts that produced an interaural frequency mismatch. Results: For the bilateral shift conditions, speech understanding decreased with increasing shift and with decreasing TMR, for both colocated and separate conditions. There was, however, no interaction between shift and spatial configuration; in other words, SRM was not affected by shift. For the unilateral shift conditions, speech understanding decreased with increasing interaural mismatch and with decreasing TMR for both the colocated and spatially separated conditions. Critically, there was a significant interaction between the amount of shift and spatial configuration; in other words, SRM decreased for increasing interaural mismatch. Conclusions: A frequency-to-place mismatch in one or both ears resulted in decreased speech understanding. SRM, however, was only affected in conditions with unilateral shifts and interaural frequency mismatch. Therefore, matching frequency information between the ears provides listeners with larger binaural-hearing benefits, for example, improved speech understanding in the presence of interfering talkers. A clinical procedure to reduce interaural frequency mismatch when programming bilateral CIs may improve benefits in speech segregation that are due to binaural-hearing abilities. 
ACKNOWLEDGMENTS: The authors thank Katelyn Depolis who helped to collect data. This study was supported by National Institutes of Health (NIH) Grant R01-DC015798 (to M.J.G. and Joshua G. W. Bernstein), R03-DC015321 (to A.K.), and R01-DC003083 (to R.Y.L.) and was supported, in part, by NIH Grant P30-HD03352 (Waisman Center core grant). The word corpus was funded by NIH Grant P30-DC04663 (Boston University Hearing Research Center core grant). The authors have no conflicts of interest to disclose. Address for correspondence: Matthew J. Goupell, Department of Hearing and Speech Sciences, University of Maryland, 0119E Lefrak Hall, College Park, MD 20742, USA. E-mail: goupell@umd.edu Received January 2, 2017; accepted November 15, 2017. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
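The SRM measure defined in the abstract is simply the difference in percent-correct words between the spatially separated and colocated conditions; a minimal sketch of that tabulation, using hypothetical scores, is shown below.

```python
# SRM = percent correct (separated) - percent correct (colocated).
# The scores below are hypothetical placeholders, not data from the study.
percent_correct = {
    # shift_mm: (colocated, separated)
    0.0: (55.0, 75.0),
    3.0: (48.0, 66.0),
    4.5: (40.0, 55.0),
    6.0: (33.0, 45.0),
}

for shift_mm, (colocated, separated) in percent_correct.items():
    srm = separated - colocated
    print(f"shift = {shift_mm} mm: SRM = {srm:.1f} percentage points")
```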

from #Audiology via ola Kala on Inoreader http://ift.tt/2B68Hza
via IFTTT

Grason‐Stadler Launches GSI Novus

Grason‐Stadler has released a new AABR/OAE screener, the GSI Novus™, a handheld, comprehensive newborn hearing screening instrument. The Novus features a touchscreen display and intuitive software in a compact hardware design, and it may be configured with any combination of AABR, TEOAE, and DPOAE, allowing seamless two-stage infant screening.

The Novus uses a fast-rate ABR algorithm with the CE-Chirp stimulus. The CE-Chirp has been shown to produce wave V responses 1.5 to 2 times larger than those elicited by traditional ABR stimuli, making it well suited to newborn screening; with larger responses, test times are shorter and more infants can be screened each day. Distortion product otoacoustic emission (DPOAE) and transient evoked otoacoustic emission (TEOAE) protocols add the flexibility required for efficient newborn screening.

HearSIM data management software complements the GSI Novus and offers everything required to manage a newborn screening program. Users can load patient names onto the Novus or quickly determine which patients need additional testing from the intuitive database view. In addition to viewing, storing, and printing test results, data can be exported to XML or Hi-Track. Device settings such as screener names, security, and risk factors may be configured from HearSIM.

Published: 1/16/2018 8:35:00 AM


from #Audiology via ola Kala on Inoreader http://ift.tt/2B5C6ts
via IFTTT

The experience of hearing loss: journey through aural rehabilitation.

The experience of hearing loss: journey through aural rehabilitation.

Int J Audiol. 2018 Jan 15;:1

Authors: Lind C

PMID: 29334296 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2mJiSFF
via IFTTT

Validation of DPOAE screening conducted by village health workers in a rural community with real-time click evoked tele-auditory brainstem response.


Int J Audiol. 2018 Jan 15;:1-6

Authors: Ramkumar V, Vanaja CS, Hall JW, Selvakumar K, Nagarajan R

Abstract
OBJECTIVE: This study assessed the validity of DPOAE screening conducted by village health workers (VHWs) in a rural community. Real-time click evoked tele-auditory brainstem response (tele-ABR) was used as the gold standard to establish validity.
DESIGN: A cross-sectional design was utilised to compare the results of screening by VHWs with those obtained via tele-ABR.
STUDY SAMPLE: One hundred and nineteen subjects (0 to 5 years) were selected randomly from a sample of 2880 infants and young children who received DPOAE screening by VHWs.
METHOD: Real-time tele-ABR was conducted using satellite or broadband internet connectivity at the village. An audiologist located at the tertiary care hospital conducted tele-ABR testing through a remote computing paradigm. Tele-ABR was recorded using standard recording parameters recommended for infants and young children. Wave morphology, repeatability and peak latency data were used for ABR analysis.
RESULTS: Tele-ABR and DPOAE findings were compared for 197 ears. The sensitivity of DPOAE screening conducted by the VHWs was 75%, and specificity was 91%. The negative and positive predictive values were 98.8% and 27.2%, respectively.
CONCLUSIONS: The validity of DPOAE screening conducted by trained VHWs was acceptable. This study supports the engagement of grassroots workers in community-based hearing health care provision.
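The reported figures follow from a standard 2x2 confusion matrix of DPOAE screening outcomes against the tele-ABR gold standard. The sketch below shows the textbook formulas in Python; the counts used are a rough back-calculation chosen so the derived values approximately reproduce the reported percentages across 197 ears, since the abstract does not give the underlying cell counts. They are illustrative, not data from the paper.

```python
# Standard screening-test metrics from a 2x2 confusion matrix, with
# tele-ABR treated as the gold standard. The example counts below are a
# rough back-calculation from the reported percentages (197 ears), not
# raw data from the study.

def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # refers correctly flagged by DPOAE screening
    specificity = tn / (tn + fp)   # passes correctly cleared
    ppv = tp / (tp + fp)           # probability a "refer" reflects a true loss
    npv = tn / (tn + fn)           # probability a "pass" is truly normal
    return sensitivity, specificity, ppv, npv

if __name__ == "__main__":
    # Illustrative counts only: 6 + 16 + 2 + 173 = 197 ears.
    sens, spec, ppv, npv = screening_metrics(tp=6, fp=16, fn=2, tn=173)
    print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")
```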

PMID: 29334277 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2D8AdOD
via IFTTT

Effects of a transient noise reduction algorithm on speech intelligibility in noise, noise tolerance and perceived annoyance in cochlear implant users.


Int J Audiol. 2018 Jan 15;:1-10

Authors: Dingemanse JG, Vroegop JL, Goedegebure A

Abstract
OBJECTIVE: To evaluate the validity and efficacy of a transient noise reduction algorithm (TNR) in cochlear implant processing and the interaction of TNR with a continuous noise reduction algorithm (CNR).
DESIGN: We studied the effects of TNR and CNR on the perception of realistic sound samples with transients, using subjective ratings of annoyance, a speech-in-noise test and a noise tolerance test.
STUDY SAMPLE: Participants were 16 experienced cochlear implant recipients wearing an Advanced Bionics Naida Q70 processor.
RESULTS: CI users rated sounds with transients as moderately annoying. Annoyance was slightly but significantly reduced by TNR. Transients caused a large decrease in speech intelligibility in noise and a moderate decrease in noise tolerance, as measured with the Acceptable Noise Level test. The TNR had no significant effect on noise tolerance or on speech intelligibility in noise. The combined application of TNR and CNR did not result in interactions.
CONCLUSIONS: The TNR algorithm was effective in reducing annoyance from transient sounds, but it did not prevent transients from degrading speech understanding in noise and noise tolerance. TNR did not reduce the beneficial effect of CNR on speech intelligibility in noise, but no cumulative improvement was found either.
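As background, transient noise reduction schemes generally work by detecting sudden, short-lived jumps in the signal envelope (a door slam, clattering cutlery) and briefly attenuating them while leaving the slower speech envelope untouched. The Python sketch below is a deliberately simplified, generic illustration of that idea; it is not the proprietary algorithm in the Advanced Bionics processor evaluated in this study, and all time constants and thresholds are arbitrary assumptions.

```python
# Generic illustration of transient noise reduction: attenuate samples whose
# fast-tracked envelope jumps well above a slowly tracked reference envelope.
# Conceptual sketch only; not the processor's actual algorithm, and all
# parameter values are arbitrary.
import numpy as np

def suppress_transients(signal, fs, attack_ms=1.0, release_ms=50.0,
                        threshold_db=12.0, attenuation_db=9.0):
    x = np.asarray(signal, dtype=float)
    fast_coef = np.exp(-1.0 / (fs * attack_ms / 1000.0))    # fast envelope follower
    slow_coef = np.exp(-1.0 / (fs * release_ms / 1000.0))   # slow reference envelope
    gain = 10.0 ** (-attenuation_db / 20.0)
    out = np.empty_like(x)
    fast = slow = 1e-6
    for i, sample in enumerate(x):
        mag = abs(sample)
        fast = fast_coef * fast + (1.0 - fast_coef) * mag
        slow = slow_coef * slow + (1.0 - slow_coef) * mag
        ratio_db = 20.0 * np.log10(fast / max(slow, 1e-9))
        # Attenuate only while the fast envelope spikes above the reference.
        out[i] = sample * gain if ratio_db > threshold_db else sample
    return out
```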

PMID: 29334269 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rb9Hm3
via IFTTT


Mutation survey and genotype-phenotype analysis of COL2A1 and COL11A1 genes in 16 Chinese patients with Stickler syndrome.


Mol Vis. 2016;22:697-704

Authors: Wang X, Jia X, Xiao X, Li S, Li J, Li Y, Wei Y, Liang X, Guo X

Abstract
PURPOSE: To identify mutations in COL2A1 and COL11A1 genes and to examine the genotype-phenotype correlation in a cohort of Chinese patients with Stickler syndrome.
METHODS: A total of 16 Chinese probands with Stickler syndrome were recruited, including nine with a family history of an autosomal dominant pattern and seven sporadic cases. All patients underwent full ocular and systemic examinations. Sanger sequencing was used to analyze all coding and adjacent regions of the COL2A1 and COL11A1 genes. Multiplex ligation-dependent probe amplification was performed to detect the gross indels of COL2A1 and COL11A1. Bioinformatics analysis was performed to evaluate the pathogenicity of the variants.
RESULTS: Five mutations in COL2A1 were identified in six of 16 probands, including three novel mutations (c.85C>T, c.3356delG, c.3401delG) and two known mutations (c.1693C>T, c.2710C>T). Of the five mutations, three were truncating mutations and the other two were missense mutations. Putative pathogenic mutations of the COL11A1 gene were absent in this cohort of patients. Gross indels were not found in COL2A1 or COL11A1 in any of the probands. High myopia was the most frequent initial ocular phenotype of Stickler syndrome. In this study, 12 Chinese probands lacked obvious systemic phenotypes.
CONCLUSIONS: In this study, three novel and two known mutations in the COL2A1 gene were identified in six of 16 Chinese patients with Stickler syndrome. This is the first study in a cohort of Chinese patients with Stickler syndrome, and the results expand the mutation spectrum of the COL2A1 gene. Analysis of the genotype-phenotype correlation showed that the early onset of high myopia with vitreous abnormalities may serve as a key indicator of Stickler syndrome, while the existence of mandibular protrusion in pediatric patients may be an efficient indicator for the absence of mutations in COL2A1 and COL11A1.
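As a small illustration of the variant notation quoted above, the cDNA names already encode whether a change is a single-nucleotide substitution (e.g., c.85C>T) or a single-base deletion (e.g., c.3356delG), which in a coding exon shifts the reading frame. The toy parser below only distinguishes those two patterns; it is an illustration of the nomenclature, not the Sanger sequencing or bioinformatics pipeline used in the study.

```python
# Toy classifier for the cDNA variant notations quoted in the abstract.
# It distinguishes single-nucleotide substitutions from single-base
# deletions and nothing more; this is an illustration of the notation,
# not the study's analysis pipeline.
import re

SUBSTITUTION = re.compile(r"^c\.(\d+)([ACGT])>([ACGT])$")
DELETION = re.compile(r"^c\.(\d+)del([ACGT])$")

def classify(variant):
    if SUBSTITUTION.match(variant):
        return "single-nucleotide substitution"
    if DELETION.match(variant):
        return "single-base deletion (frameshifting within a coding sequence)"
    return "other/unparsed"

for v in ["c.85C>T", "c.3356delG", "c.3401delG", "c.1693C>T", "c.2710C>T"]:
    print(v, "->", classify(v))
```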

PMID: 27390512 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mLFghJ
via IFTTT