Saturday, 14 May 2016

Wireless and acoustic hearing with bone-anchored hearing devices

10.1080/14992027.2016.1177209
Arjan J. Bosman

from #Audiology via ola Kala on Inoreader http://ift.tt/1Tedstc
via IFTTT

Receptive language as a predictor of cochlear implant outcome for prelingually deaf adults

10.3109/14992027.2016.1157269
Alexandra Rousset

from #Audiology via ola Kala on Inoreader http://ift.tt/1TOHiUt
via IFTTT

Aging effects on the Binaural Interaction Component of the Auditory Brainstem Response in the Mongolian Gerbil: Effects of Interaural Time and Level Differences

Publication date: Available online 10 May 2016
Source:Hearing Research
Author(s): Geneviève Laumen, Daniel J. Tollin, Rainer Beutelmann, Georg M. Klump
The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs), as well as on the DN1 component of the binaural interaction component (BIC) of the ABR, was investigated in young and old Mongolian gerbils (Meriones unguiculatus). Measurements were made at a fixed sound pressure level (SPL) and at a fixed level above the visually detected ABR threshold to compensate for individual differences in hearing threshold. In both stimulation modes (fixed SPL and fixed level above the visually detected ABR threshold), an effect of ITD on the latency and the amplitude of wave 4, as well as of the BIC, was observed. With increasing absolute ITD values, BIC latencies increased and amplitudes decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals, whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals, the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing, which is reflected in the BIC.



from #Audiology via ola Kala on Inoreader http://ift.tt/1WZxjB7
via IFTTT

Horizontal Sound Localization in Cochlear Implant Users with a Contralateral Hearing Aid

Publication date: Available online 10 May 2016
Source:Hearing Research
Author(s): Lidwien C.E. Veugen, Maartje M.E. Hendrikse, Marc M. van Wanrooij, Martijn J.H. Agterberg, Josef Chalupper, Lucas H.M. Mens, Ad F.M. Snik, A. John van Opstal
Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing.



from #Audiology via ola Kala on Inoreader http://ift.tt/1ZD4f1m
via IFTTT

EEG activity evoked in preparation for multi-talker listening by adults and children

Publication date: Available online 10 May 2016
Source:Hearing Research
Author(s): Emma Holmes, Padraig T. Kitterick, A. Quentin Summerfield
Selective attention is critical for successful speech perception because speech is often encountered in the presence of other sounds, including the voices of competing talkers. Faced with the need to attend selectively, listeners perceive speech more accurately when they know characteristics of upcoming talkers before they begin to speak. However, the neural processes that underlie the preparation of selective attention for voices are not fully understood. The current experiments used electroencephalography (EEG) to investigate the time course of brain activity during preparation for an upcoming talker in young adults aged 18-27 years with normal hearing (Experiments 1 and 2) and in typically-developing children aged 7-13 years (Experiment 3). Participants reported key words spoken by a target talker when an opposite-gender distractor talker spoke simultaneously. The two talkers were presented from different spatial locations (±30° azimuth). Before the talkers began to speak, a visual cue indicated either the location (left/right) or the gender (male/female) of the target talker. Adults evoked preparatory EEG activity that started shortly after (<50 ms) the visual cue was presented and was sustained until the talkers began to speak. The location cue evoked similar preparatory activity in Experiments 1 and 2 with different samples of participants. The gender cue did not evoke preparatory activity when it predicted gender only (Experiment 1) but did evoke preparatory activity when it predicted the identity of a specific talker with greater certainty (Experiment 2). Location cues evoked significant preparatory EEG activity in children but gender cues did not. The results provide converging evidence that listeners evoke consistent preparatory brain activity for selecting a talker by their location (regardless of their gender or identity), but not by their gender alone.



from #Audiology via ola Kala on Inoreader http://ift.tt/1WZxjB5
via IFTTT


Wireless and acoustic hearing with bone-anchored hearing devices.

Int J Audiol. 2016 May 13;:1-6

Authors: Bosman AJ, Mylanus EA, Hol MK, Snik AF

Abstract
OBJECTIVE: The efficacy of wireless connectivity in bone-anchored hearing was studied by comparing the wireless and acoustic performance of the Ponto Plus sound processor from Oticon Medical relative to the acoustic performance of its predecessor, the Ponto Pro.
STUDY SAMPLE: Nineteen subjects with more than two years' experience with a bone-anchored hearing device were included. Thirteen subjects were fitted unilaterally and six bilaterally.
DESIGN: Subjects served as their own control. First, subjects were tested with the Ponto Pro processor. After a four-week acclimatization period, performance with the Ponto Plus processor was measured. In the laboratory, wireless and acoustic input levels were made equal. In daily life, equal settings of wireless and acoustic input were used when watching TV; however, when using the telephone, the acoustic input was reduced by 9 dB relative to the wireless input.
RESULTS: Speech scores for the microphone input with the Ponto Pro and for both input modes of the Ponto Plus processor were essentially equal when equal input levels of the wireless and microphone inputs were used. Only the TV condition showed a statistically significant (p < 5%) lower speech reception threshold for the wireless relative to the microphone input. In real life, evaluation of speech quality, speech intelligibility in quiet and noise, and annoyance by ambient noise when using a landline phone, using a mobile telephone, and watching TV showed a clear preference (p < 1%) for the Ponto Plus system with streamer over the microphone input. Due to the small number of respondents with a landline phone (N = 7), the result for noise annoyance was only significant at the 5% level.
CONCLUSION: Equal input levels for acoustic and wireless inputs result in equal speech scores, showing a (near) equivalence of acoustic and wireless sound transmission with the Ponto Pro and Ponto Plus. The default 9-dB difference between microphone and wireless input when using the telephone results in a substantial wireless benefit on the telephone. The preference for wirelessly transmitted audio when watching TV can be attributed to the relatively poor sound quality of the backward-facing loudspeakers in flat-screen TVs. The ratio of wireless and acoustic input can easily be set to the user's preference with the streamer's volume control.

PMID: 27176657 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/27m22xb
via IFTTT


Sensory Impairment, Functional Balance and Physical Activity with All-Cause Mortality.

J Phys Act Health. 2016 May 11;

Authors: Loprinzi PD, Crush E

Abstract
OBJECTIVE: No study has comprehensively examined the independent and combined effects of sensory impairment, physical activity and balance on mortality risk, which was this study's purpose.
METHODS: Data from the population-based 2003-2004 National Health and Nutrition Examination Survey were used, with follow-up through 2011. Physical activity was assessed via accelerometry. Balance was assessed via the Romberg test. Peripheral neuropathy was assessed objectively using a standard monofilament. Visual impairment was objectively assessed using an autorefractor. Hearing impairment was assessed via self-report. A 5-level index variable (a higher score is worse) was calculated based on the participant's degree of sensory impairment, dysfunctional balance and physical inactivity.
RESULTS: Among the 1,658 participants (40-85 years), 228 died during the median follow-up period of 92 months. Hearing (HR=1.18; P=0.40), vision (HR=1.17; P=0.58) and peripheral neuropathy (HR=1.06; P=0.71) were not independently associated with all-cause mortality, but physical activity (HR=0.97; P=0.01) and functional balance (HR=0.59; P=0.03) were. Compared to those with an index score of 0, the hazard ratios (95% CI) for those with index scores of 1, 2 and 3, respectively, were 1.20 (0.46-3.13), 2.63 (1.08-6.40) and 2.88 (1.36-6.06).
CONCLUSIONS: Physical activity and functional balance are independent contributors to survival.

PMID: 27172618 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/1TMOBMs
via IFTTT

Meniere's disease.

Nat Rev Dis Primers. 2016;2:16028

Authors: Nakashima T, Pyykkö I, Arroll MA, Casselbrant ML, Foster CA, Manzoor NF, Megerian CA, Naganawa S, Young YH

Abstract
Meniere's disease (MD) is a disorder of the inner ear that causes vertigo attacks, fluctuating hearing loss, tinnitus and aural fullness. The aetiology of MD is multifactorial. A characteristic sign of MD is endolymphatic hydrops (EH), a disorder in which excessive endolymph accumulates in the inner ear and causes damage to the ganglion cells. In most patients, the clinical symptoms of MD present after considerable accumulation of endolymph has occurred. However, some patients develop symptoms in the early stages of EH. The reason for the variability in the symptomatology is unknown and the relationship between EH and the clinical symptoms of MD requires further study. The diagnosis of MD is based on clinical symptoms but can be complemented with functional inner ear tests, including audiometry, vestibular-evoked myogenic potential testing, caloric testing, electrocochleography or head impulse tests. MRI has been optimized to directly visualize EH in the cochlea, vestibule and semicircular canals, and its use is shifting from the research setting to the clinic. The management of MD is mainly aimed at the relief of acute attacks of vertigo and the prevention of recurrent attacks. Therapeutic options are based on empirical evidence and include the management of risk factors and a conservative approach as the first line of treatment. When medical treatment is unable to suppress vertigo attacks, intratympanic gentamicin therapy or endolymphatic sac decompression surgery is usually considered. This Primer covers the pathophysiology, symptomatology, diagnosis, management, quality of life and prevention of MD.

PMID: 27170253 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/24Zczj1
via IFTTT


Genotypes and phenotypes of a family with a deaf child carrying combined heterozygous mutations in SLC26A4 and GJB3 genes.

Mol Med Rep. 2016 May 13;

Authors: Li Y, Zhu B

Abstract
Mutations in the SLC26A4 gene have been shown to cause a type of deafness referred to as large vestibular aqueduct syndrome (LVAS), whereas mutations in the GJB3 gene have been associated with nonsyndromic deafness. However, the clinical phenotypes of these mutations vary and remain to be fully elucidated. The present study performed genetic analysis of a Chinese family in which the child was deaf and the parents were healthy. Sanger sequencing demonstrated that the affected individual harbored three heterozygous mutations in the SLC26A4 and GJB3 genes, as follows: SLC26A4 IVS-2 A>G, SLC26A4 c.2168 A>G and GJB3 c.538 C>T. The affected individual exhibited hearing loss and was diagnosed with LVAS by computed tomography scan. The mother and father of the affected individual harbored the heterozygous mutations SLC26A4 IVS-2 A>G and GJB3 c.538 C>T, and the heterozygous mutation SLC26A4 c.2168 A>G, respectively. Neither parent exhibited any hearing loss. The results obtained from the deaf patient provide genetic and clinical evidence that carrying combined heterozygous mutations in the GJB3 and SLC26A4 genes may be involved in the etiology of severe hearing loss, the mechanism of which requires further examination.

PMID: 27176802 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ZQ6yhG
via IFTTT

Hearing Loss Health Care for Older Adults.

J Am Board Fam Med. 2016 May-Jun;29(3):394-403

Authors: Contrera KJ, Wallhagen MI, Mamo SK, Oh ES, Lin FR

Abstract
Hearing deficits are highly prevalent among older adults and are associated with declines in cognitive, physical, and mental health. However, hearing loss in the geriatric population often goes untreated and generally receives little clinical emphasis in primary care practice. This article reviews hearing health care for older adults, focusing on what is most relevant for family physicians. The objective of hearing loss treatment is to ensure that a patient can communicate effectively in all settings. We present the 5 major obstacles to obtaining effective hearing and rehabilitative care: awareness, access, treatment options, cost, and device effectiveness. Hearing technologies are discussed, along with recommendations on when it is appropriate to screen, refer, or counsel a patient. The purpose of this article is to provide pragmatic recommendations for the clinical management of the older adult with hearing loss that can be conducted in family medicine practices.

PMID: 27170797 [PubMed - in process]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1X9F9bv
via IFTTT

Wireless and acoustic hearing with bone-anchored hearing devices

10.1080/14992027.2016.1177209<br/>Arjan J. Bosman

from #Audiology via ola Kala on Inoreader http://ift.tt/1ZPUZao
via IFTTT

Double-Blind Sham-Controlled Crossover Trial of Repetitive Transcranial Magnetic Stimulation for Mal de Debarquement Syndrome.

Objective: To determine whether the chronic rocking dizziness that occurs in Mal de Debarquement Syndrome (MdDS) can be suppressed with repetitive transcranial magnetic stimulation (rTMS) beyond the treatment period. Methods: We performed a prospective randomized double-blind sham-controlled crossover trial of 5 days of rTMS utilizing high-frequency (10 Hz) stimulation over the left dorsolateral prefrontal cortex (DLPFC). Results: Eight right-handed women (44.5 [SD 7.0] yr) with classical motion-triggered MdDS (mean duration 42.1 [SD 13.2] mo) participated. Group-level mixed-effects repeated-measures analysis of variance (ANOVA) showed improvement in our primary outcome measure, the Dizziness Handicap Inventory (DHI), at Post TMS Weeks 1, 3, and 4 (p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1TcxQuP
via IFTTT