Wednesday, 19 October 2016

Spatial and temporal disparity in signals and maskers affects signal detection in non-human primates

Publication date: Available online 19 October 2016
Source:Hearing Research
Author(s): Francesca Rocchi, Margit E. Dylla, Peter A. Bohlen, Ramnarayan Ramachandran
Detection thresholds for auditory stimuli (signals) increase in the presence of maskers. Natural environments contain maskers/distractors that can have a wide range of spatiotemporal properties relative to the signal. While these parameters have been well explored psychophysically in humans, they have not been well explored in animal models, and their neuronal underpinnings are not well understood. As a precursor to the neuronal measurements, we report the effects of systematically varying the spatial and temporal relationship between signals and noise in macaque monkeys (Macaca mulatta and Macaca radiata). Macaques detected tones masked by noise in a Go/No-Go task in which the spatiotemporal relationships between the tone and noise were systematically varied. Masked thresholds were higher when the masker was continuous or gated on and off simultaneously with the signal, and lower when the continuous masker was turned off during the signal. A burst of noise caused higher masked thresholds if it completely temporally overlapped with the signal, whereas partial overlap resulted in lower thresholds. Noise durations needed to be at least 100 ms before significant masking could be observed. Thresholds for short-duration tones were significantly higher when the onsets of signal and masker coincided compared to when the signal was presented during the steady-state portion of the noise (overshoot). When signal and masker were separated in space, masked signal detection thresholds decreased relative to when the masker and signal were co-located (spatial release from masking). Masking release was larger for azimuthal separations than for elevation separations. These results in macaques are similar to those observed in humans, suggesting that the specific spatiotemporal relationship between signal and masker determines thresholds in natural environments for macaques in a manner similar to humans. These results form the basis for future investigations of neuronal correlates and mechanisms of masking.
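As a rough illustration of the detection measures involved, the sketch below fits a logistic psychometric function to hypothetical Go/No-Go hit rates and reads out spatial release from masking as the threshold difference between co-located and separated conditions. The data, function names, and 50%-correct criterion are assumptions for illustration, not the authors' analysis.

```python
# Illustrative sketch (hypothetical data, not the authors' analysis): estimate
# masked detection thresholds from hit rates and compute spatial release from masking.
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, threshold_db, slope):
    """Psychometric function: proportion of detections vs. tone level (dB SPL)."""
    return 1.0 / (1.0 + np.exp(-slope * (level_db - threshold_db)))

def masked_threshold(levels_db, hit_rates):
    """Fit the logistic and return the level corresponding to 50% detection."""
    popt, _ = curve_fit(logistic, levels_db, hit_rates,
                        p0=[np.median(levels_db), 0.5])
    return popt[0]

levels = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])   # tone level, dB SPL
hits_colocated = np.array([0.05, 0.10, 0.30, 0.65, 0.90, 0.98])
hits_separated = np.array([0.10, 0.35, 0.70, 0.92, 0.98, 1.00])

thr_co = masked_threshold(levels, hits_colocated)
thr_sep = masked_threshold(levels, hits_separated)
print(f"Spatial release from masking: {thr_co - thr_sep:.1f} dB")
```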



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eTJGRi
via IFTTT

The influence of memory and attention on the ear advantage in dichotic listening

Publication date: Available online 19 October 2016
Source:Hearing Research
Author(s): Anita D’Anselmo, Daniele Marzoli, Alfredo Brancucci
The role of memory retention and attentional control in hemispheric asymmetry was investigated using a verbal dichotic listening paradigm with the consonant–vowel syllables (/ba/, /da/, /ga/, /ka/, /pa/ and /ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in the FL condition. The REA increased significantly in the NF attention condition at the 3-s compared to the 0-s delay and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention.
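In dichotic listening work the ear advantage is commonly summarized as a laterality index over the correct reports from each ear. The short sketch below shows one such convention with made-up counts; it is not necessarily the scoring used in this study.

```python
# Illustrative sketch (assumed scoring, not the authors' analysis): quantify
# the ear advantage in a dichotic listening block as a laterality index.
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Positive values indicate a right ear advantage (REA), negative an LEA."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical counts for one participant in the forced-right (FR) condition.
print(laterality_index(right_correct=22, left_correct=10))  # 37.5 -> REA
```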



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2enET7R
via IFTTT

Performance in Noise: Impact of Reduced Speech Intelligibility on Sailor Performance in a Navy Command and Control Environment

Publication date: Available online 19 October 2016
Source:Hearing Research
Author(s): M. David Keller, John M. Ziriax, William Barns, Benjamin Sheffield, Douglas Brungart, Tony Thomas, Bobby Jaeger, Kurt Yankaskas
Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic Naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.
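The abstract describes an adaptive system that drives speech intelligibility toward fixed target levels by adjusting the signal-to-noise ratio. Below is a minimal sketch of one way such a control rule could look; the proportional update and its gain are assumptions, not the study's implementation.

```python
# Illustrative sketch (assumed control rule, not the study's system): adjust the
# speech signal-to-noise ratio toward a target Modified Rhyme Test score.
def adapt_snr(snr_db: float, measured_score_pct: float, target_score_pct: float,
              gain_db_per_point: float = 0.1) -> float:
    """Nudge the SNR (dB) in proportion to the intelligibility shortfall.

    measured_score_pct and target_score_pct are percent-correct scores
    (e.g., 80, 60, or 40) from a Modified Rhyme Test block.
    """
    return snr_db + gain_db_per_point * (target_score_pct - measured_score_pct)

# Example: the last block scored 72% but the target level is 60% -> lower the SNR.
print(adapt_snr(snr_db=-3.0, measured_score_pct=72.0, target_score_pct=60.0))  # -4.2 dB
```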



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eTIQ6W
via IFTTT

Category Selectivity of the N170 and the Role of Expertise in Deaf Signers

Publication date: Available online 19 October 2016
Source:Hearing Research
Author(s): Teresa V. Mitchell
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms, including upright faces, inverted faces, and hands, were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry, while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggest that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
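For readers unfamiliar with how N170 amplitude and latency are typically read out of an averaged ERP, here is a minimal sketch that picks the most negative point in a post-stimulus window. The window bounds and the synthetic waveform are illustrative assumptions rather than the author's pipeline.

```python
# Illustrative sketch (assumed analysis, not the author's pipeline): extract
# N170 peak amplitude and latency from an averaged ERP waveform.
import numpy as np

def n170_peak(erp_uv: np.ndarray, times_ms: np.ndarray, window=(130.0, 200.0)):
    """Return (amplitude_uV, latency_ms) of the most negative point in the window."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.argmin(erp_uv[mask])          # N170 is a negative-going component
    return erp_uv[mask][idx], times_ms[mask][idx]

# Hypothetical epoch: 1 kHz sampling, -100 to 500 ms around stimulus onset.
times = np.arange(-100, 500).astype(float)
erp = -5.0 * np.exp(-((times - 165.0) ** 2) / (2 * 15.0 ** 2))  # synthetic N170
print(n170_peak(erp, times))  # approximately (-5.0, 165.0)
```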



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2enDdv8
via IFTTT

Editorial Introduction: Special Issue on Plasticity Following Hearing Loss and Deafness

Publication date: Available online 19 October 2016
Source:Hearing Research
Author(s): Blake E. Butler, M. Alex Meredith, Stephen G. Lomber




from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eTGQMl
via IFTTT

Musicians' edge: a comparison of auditory processing, cognitive abilities and statistical learning

Publication date: Available online 19 October 2016
Source:Hearing Research
Author(s): Pragati Rao Mandikal Vasuki, Mridula Sharma, Katherine Demuth, Joanne Arciuli
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians’ advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N=17) and non-musicians (N=18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians’ superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli.
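Statistical-learning tasks of this kind typically expose listeners or viewers to a stream whose only structure is the transitional probability between adjacent elements. The sketch below computes those probabilities for a toy syllable stream; the stimuli and scoring are illustrative, not the materials used in the study.

```python
# Illustrative sketch (assumed stimuli, not the study's materials): compute
# transitional probabilities between adjacent syllables in a familiarization
# stream, i.e., the regularities a statistical-learning task asks listeners to pick up.
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent pair observed in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: count / first_counts[pair[0]]
            for pair, count in pair_counts.items()}

stream = ["tu", "pi", "ro", "go", "la", "bu", "tu", "pi", "ro", "da", "ko", "ti"]
tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])  # 1.0: "pi" always follows "tu" in this toy stream
print(tps[("ro", "go")])  # 0.5: "ro" is followed by "go" or "da"
```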



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2em0123
via IFTTT

Hooray for Irina!

Fourth-year JDP Language and Communicative Disorders student Irina Potapova presented her research at the UCSD Frontiers of Innovation Scholars Program (FISP) symposium at UC San Diego on October 18th. The FISP symposium celebrates awards made for undergraduate, graduate, and post-doctoral research that is interdisciplinary in nature and involves mentors from at least two divisions at UC San Diego. Ms. Potapova was awarded a graduate fellowship to work with Leanne Chukoskie and Jeanne Townsend to use eye tracking as a sensitive online assessment of novel word learning in young children both with and without language disorders.



from #Audiology via ola Kala on Inoreader http://ift.tt/2emUpkw
via IFTTT

Aftereffects of Intense Low-Frequency Sound on Spontaneous Otoacoustic Emissions: Effect of Frequency and Level

Abstract

The presentation of intense, low-frequency (LF) sound to the human ear can cause very slow, sinusoidal oscillations of cochlear sensitivity after LF sound offset, coined the “Bounce” phenomenon. Changes in level and frequency of spontaneous otoacoustic emissions (SOAEs) are a sensitive measure of the Bounce. Here, we investigated the effect of LF sound level and frequency on the Bounce. Specifically, the level of SOAEs was tracked for minutes before and after a 90-s LF sound exposure. Trials were carried out with several LF sound levels (93 to 108 dB SPL corresponding to 47 to 75 phons at a fixed frequency of 30 Hz) and different LF sound frequencies (30, 60, 120, 240 and 480 Hz at a fixed loudness level of 80 phons). At an LF sound frequency of 30 Hz, a minimal sound level of 102 dB SPL (64 phons) was sufficient to elicit a significant Bounce. In some subjects, however, 93 dB SPL (47 phons), the lowest level used, was sufficient to elicit the Bounce phenomenon and actual thresholds could have been even lower. Measurements with different LF sound frequencies showed a mild reduction of the Bounce phenomenon with increasing LF sound frequency. This indicates that the strength of the Bounce not only is a simple function of the spectral separation between SOAE and LF sound frequency but also depends on absolute LF sound frequency, possibly related to the magnitude of the AC component of the outer hair cell receptor potential.
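One plausible way to quantify a slow, sinusoidal post-exposure change in SOAE level is to fit a decaying sinusoid to the level-versus-time track. The model form, sampling, and numbers below are assumptions for illustration, not the authors' analysis.

```python
# Illustrative sketch (assumed model, not the authors' analysis): quantify a
# "Bounce"-like oscillation by fitting a decaying sinusoid to SOAE level vs. time.
import numpy as np
from scipy.optimize import curve_fit

def bounce_model(t_s, amp_db, period_s, decay_s, phase, offset_db):
    """Slow, decaying sinusoidal change in SOAE level after LF sound offset."""
    return offset_db + amp_db * np.exp(-t_s / decay_s) * np.sin(
        2 * np.pi * t_s / period_s + phase)

# Hypothetical SOAE level track sampled every 5 s for 3 minutes after offset.
t = np.arange(0, 180, 5.0)
level = bounce_model(t, 1.5, 60.0, 90.0, 0.0, -2.0) + 0.1 * np.random.randn(t.size)

popt, _ = curve_fit(bounce_model, t, level,
                    p0=[1.0, 60.0, 80.0, 0.0, np.mean(level)])
print(f"Bounce amplitude ~{popt[0]:.2f} dB, period ~{popt[1]:.0f} s")
```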



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2emLHTp
via IFTTT

Application of Mouse Models to Research in Hearing and Balance.

Application of Mouse Models to Research in Hearing and Balance.

J Assoc Res Otolaryngol. 2016 Oct 17;

Authors: Ohlemiller KK, Jones SM, Johnson KR

Abstract
Laboratory mice (Mus musculus) have become the major model species for inner ear research. The major uses of mice include gene discovery, characterization, and confirmation. Every application of mice is founded on assumptions about what mice represent and how the information gained may be generalized. A host of successes support the continued use of mice to understand hearing and balance. Depending on the research question, however, some mouse models and research designs will be more appropriate than others. Here, we recount some of the history and successes of the use of mice in hearing and vestibular studies and offer guidelines to those considering how to apply mouse models.

PMID: 27752925 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2e8AYzP
via IFTTT

Is it Beneficial for Deaf Children to Learn Sign Language?

A researcher at the University of Connecticut, Marie Coppola, recently received a National Science Foundation grant "to study the impact of early language experiences—whether spoken or signed—on how children learn." She hypothesizes that the difference in success is not a matter of whether the language is spoken or signed, but rather whether access to any language comes early or late.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eiU82S
via IFTTT

Mandibulofacial Dysostosis with Microcephaly: Mutation and Database Update.

Mandibulofacial Dysostosis with Microcephaly: Mutation and Database Update.

Hum Mutat. 2016 Feb;37(2):148-54

Authors: Huang L, Vanstone MR, Hartley T, Osmond M, Barrowman N, Allanson J, Baker L, Dabir TA, Dipple KM, Dobyns WB, Estrella J, Faghfoury H, Favaro FP, Goel H, Gregersen PA, Gripp KW, Grix A, Guion-Almeida ML, Harr MH, Hudson C, Hunter AG, Johnson J, Joss SK, Kimball A, Kini U, Kline AD, Lauzon J, Lildballe DL, López-González V, Martinezmoles J, Meldrum C, Mirzaa GM, Morel CF, Morton JE, Pyle LC, Quintero-Rivera F, Richer J, Scheuerle AE, Schönewolf-Greulich B, Shears DJ, Silver J, Smith AC, Temple IK, UCLA Clinical Genomics Center, van de Kamp JM, van Dijk FS, Vandersteen AM, White SM, Zackai EH, Zou R, Care4Rare Canada Consortium, Bulman DE, Boycott KM, Lines MA

Abstract
Mandibulofacial dysostosis with microcephaly (MFDM) is a multiple malformation syndrome comprising microcephaly, craniofacial anomalies, hearing loss, dysmorphic features, and, in some cases, esophageal atresia. Haploinsufficiency of a spliceosomal GTPase, U5-116 kDa/EFTUD2, is responsible. Here, we review the molecular basis of MFDM in the 69 individuals described to date, and report mutations in 38 new individuals, bringing the total number of reported individuals to 107 from 94 kindreds. Pathogenic EFTUD2 variants comprise 76 distinct mutations and seven microdeletions. Among point mutations, missense substitutions are infrequent (14 out of 76; 18%) relative to stop-gain (29 out of 76; 38%) and splicing (33 out of 76; 43%) mutations. Where known, mutation origin was de novo in 48 out of 64 individuals (75%), dominantly inherited in 12 out of 64 (19%), and due to proven germline mosaicism in four out of 64 (6%). Highly penetrant clinical features include microcephaly, first and second arch craniofacial malformations, and hearing loss; esophageal atresia is present in an estimated ∼27%. Microcephaly is virtually universal in childhood, with some adults exhibiting late "catch-up" growth and normocephaly at maturity. Occasionally reported anomalies include vestibular and ossicular malformations, reduced mouth opening, atrophy of cerebral white matter, structural brain malformations, and epibulbar dermoid. All reported EFTUD2 mutations can be found in the EFTUD2 mutation database (http://ift.tt/2dnLwtj).

PMID: 26507355 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2e1Dd4h
via IFTTT

Hearing Aid Batteries: The Past, Present, and Future

Modern digital hearing aids offer a huge variety of form factors, features, and wireless connectivity options that allow for individual hearing solutions. However, the price of functions like situation-based real-time processing, binaural algorithms, or streaming is an increased demand on battery performance. So far, the topic of efficient powering has received only scant attention, but this may change soon due to the many good reasons for using rechargeable batteries.

from #Audiology via ola Kala on Inoreader http://ift.tt/2e0qrD8
via IFTTT

Monitoring Progression of 12 Cases of Non-Operated Middle Ear Cholesteatoma With Non-Echoplanar Diffusion Weighted Magnetic Resonance Imaging: Our Experience.

Aim: The aim of this study is to gain insight into the disease progression and behavior of primary cholesteatoma in a cohort of patients who did not have surgery, using serial non-echoplanar diffusion-weighted magnetic resonance imaging (DW MRI) monitoring. Methods: Retrospective longitudinal observational study of 12 cases of middle ear cleft cholesteatoma diagnosed between 2009 and 2014 where surgery was not performed for various reasons. All cases were monitored radiologically with non-echoplanar half-Fourier acquisition single-shot turbo spin-echo diffusion weighted imaging annually for a median period of 23 months (between 11 and 45 mo) to evaluate for changes in disease volume and direction of growth. Results: Of the 12 cases, one was an outlier in which cholesteatoma growth was disproportionately high compared with the rest of the cases, falling outside the standard deviation range. A third of the cases had radiological evidence of cholesteatoma growth. The mean growth was about 11.9% of the initial disease volume per year. Seven out of the 12 cases had radiological evidence of cholesteatoma regression in terms of size, with three cases having negative follow-up DW-MRI scans as early as 17 months. The mean regression rate was much higher than the mean growth rate, at 54.3% of the initial disease volume per year. The direction of greatest growth was craniocaudal. Conclusion: Within the limits of our longitudinal study, we have shown that by monitoring with non-echoplanar diffusion weighted imaging, cholesteatoma can progress or regress when left untreated by surgery. The greatest progression was recorded in the craniocaudal direction. Copyright (C) 2016 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company
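The growth and regression rates quoted above are expressed as a percentage of the initial disease volume per year; a minimal sketch of that computation with hypothetical volumes is shown below.

```python
# Illustrative sketch (hypothetical numbers): annualized percentage change in
# cholesteatoma volume between two serial DW-MRI studies.
def annualized_volume_change(v_baseline_mm3: float, v_followup_mm3: float,
                             interval_months: float) -> float:
    """Percent of the initial volume gained (+) or lost (-) per year."""
    years = interval_months / 12.0
    return 100.0 * (v_followup_mm3 - v_baseline_mm3) / v_baseline_mm3 / years

# Example: 250 mm^3 grows to 280 mm^3 over 23 months -> about 6.3% per year.
print(annualized_volume_change(250.0, 280.0, 23.0))
```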

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePTjAp
via IFTTT

Evaluation of Rigid Cochlear Models for Measuring Cochlear Implant Electrode Position.

Objective: To investigate the accuracy of rigid cochlear models in measuring intra-cochlear positions of cochlear implant (CI) electrodes. Patients: Ninety-three adults who had undergone CI and pre- and postoperative computed tomographic (CT) imaging. Main Outcome Measures: Seven rigid models of cochlear anatomy were constructed using micro-CTs of cochlear specimens. Using each of the seven models, the position of each electrode in each of the 98 ears in our dataset was measured as its depth along the length of the cochlea, its distance to the basilar membrane, and its distance to the modiolus. Cochlear duct length was also measured using each model. Results: Standard deviations (SDs) across rigid cochlear models in measures of electrode depth, distance to basilar membrane, distance to modiolus, and length of the cochlear duct at two turns were 0.68, 0.11, 0.15, and 1.54 mm. Comparing the estimated position of the electrodes with respect to the basilar membrane, i.e., deciding whether an electrode was located within the scala tympani (ST) or the scala vestibuli (SV), there was not unanimous agreement among the models for 19% of all the electrodes. With respect to the modiolus, each electrode was classified into one of three groups depending on its modiolar distance: close, medium, and far. Rigid models did not unanimously agree on modiolar distance for approximately 50% of the electrodes tested. Conclusions: Inter-model variance of rigid cochlear models exists, demonstrating that measurements made using rigid cochlear models are limited in terms of accuracy because of non-rigid inter-subject variations in cochlear anatomy. Copyright (C) 2016 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek6qHq
via IFTTT

Bilateral Facial Paralysis as Presenting Symptoms in Acute Lymphoblastic Leukemia.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePTBaE
via IFTTT

An Unusual Case of Lymphoepithelioma-Like Carcinoma of the External Auditory Canal.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek9iEi
via IFTTT

Diagnostic Criteria for Detection of Vestibular Schwannomas in the VA Population.

Objective: To investigate the prevalence of vestibular schwannoma (VS) and asymmetric sensorineural hearing loss in the Veterans Administration hospital population and to analyze a more efficient method of diagnosing VS in a population with significant noise exposure. Study Design: Retrospective review of South Central (VISN 16) Veterans Administration hospitals. Methods: Records were queried for ICD-9 codes for asymmetric sensorineural hearing loss or VS between 1999 and 2012. Patient demographics, signs and symptoms at presentation, audiogram and imaging data, and management data were collected and analyzed. Audiograms from tumor patients were compared with those of controls matched for age, sex, combat experience, and medical comorbidity (2:1 control-to-case ratio). Results: The prevalence of VS was 1 per 1,145 patients in this population, with a mean age at diagnosis of 62 years. Patients with VS presented more commonly with unilateral tinnitus, rollover, and absent acoustic reflexes than matched controls, but the positive predictive value of these findings was low. Published criteria for defining hearing asymmetry showed variable sensitivity (51-89%) and low specificity (0-42%) for the detection of VS in this population. Asymmetry criteria with a specificity for VS of 80% or greater were: a >15 dB threshold difference at 3 kHz combined with unilateral tinnitus, a >=45 dB threshold difference at 3 kHz regardless of tinnitus, or a word recognition score difference of >=80%. With serial audiograms obtained 2.5 years apart or more, a >=10 dB threshold increase at any frequency between 0.5 and 4 kHz had 100% sensitivity for tumor, and a >=10 dB increase at 3 kHz had a specificity of 84%. The majority of patients were observed, whereas only 30% had surgery. Patients who were observed were older than those treated with surgery or radiation (p
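The high-specificity asymmetry criteria reported above amount to a simple decision rule over the interaural threshold difference at 3 kHz, tinnitus laterality, and the word recognition score difference. A minimal Python sketch of that rule is given below; the function and parameter names are illustrative, and it assumes the inputs are already expressed as interaural differences.

def meets_high_specificity_asymmetry(diff_3khz_db, unilateral_tinnitus, wrs_difference_pct):
    """Return True if any of the reported >=80%-specificity asymmetry criteria is met.

    diff_3khz_db: interaural pure-tone threshold difference at 3 kHz, in dB
    unilateral_tinnitus: True if tinnitus is reported on one side only
    wrs_difference_pct: interaural word recognition score difference, in percentage points
    """
    if diff_3khz_db > 15 and unilateral_tinnitus:
        return True
    if diff_3khz_db >= 45:
        return True
    return wrs_difference_pct >= 80

# Example: a 20 dB asymmetry at 3 kHz with unilateral tinnitus would flag for imaging.
print(meets_high_specificity_asymmetry(20, True, 10))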

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePQG1D
via IFTTT

Posterior Fossa Spontaneous Cerebrospinal Fluid Leaks.

Objective: To describe the diagnosis and management of spontaneous lateral skull base cerebrospinal fluid (CSF) leaks that originate from the posterior fossa. Study Design: Retrospective case review. Setting: Tertiary university hospital. Patients: Adult patients from 2005 to 2015 who underwent surgical repair of a spontaneous lateral skull base CSF leak with intraoperative confirmation of a posterior fossa leak source. Intervention: Surgical repair. Main Outcome Measures: CSF leak resolution. Results: Five patients had CSF leaks from the posterior fossa. The mean age at presentation was 54 years (range, 19-79), the mean body mass index (BMI) was 32.6 (standard deviation [SD], 8.4), and the mean follow-up was 34.6 months (SD, 19.4). Presentations did not differ from those of CSF leaks through middle fossa defects: three patients had a history of meningitis, and all patients had clear otorrhea following tympanostomy tube placement. All patients had resolution of the leak after surgical repair, although two required revision surgery for persistent leaks and one had a postoperative infection. Surgical approaches included one middle fossa, two transmastoid, one combined middle fossa/transmastoid, and one transcanal. Radiographic studies suggested a posterior fossa source in all cases, but the findings were often subtle. Conclusion: Posterior fossa CSF leaks represent a rare subset of spontaneous lateral skull base leaks. Diligent radiographic review and intraoperative assessment of the posterior fossa plate are crucial, and the size and location of the defect dictate the optimal surgical approach. Surgeons should consider a posterior fossa source in failed repairs or when the initial surgery did not fully evaluate the posterior fossa plate.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek8oaQ
via IFTTT

Cochlear Implantation in the Elderly: Does Age Matter?.

Objective: To compare the outcome of hearing rehabilitation in younger versus older adult cochlear implant recipients, and to analyze surgical and postoperative complications as well as the number of auditory therapy sessions in the two age groups. Study Design: Individual retrospective cohort study. Methods: A cohort of 145 postlingually deafened adults was evaluated. Patients were divided into two groups based on age at implantation: Group I, 18 to 69 years; and Group II, 70 years and older. Postoperative hearing performance was measured with the German Freiburg monosyllabic word test (FM) and the Oldenburg sentence test (OLSA). Results: Postoperative hearing results in both groups plateaued by 12 months after implantation and remained constant at the 2- and 3-year intervals. There was a significant difference in complications after cochlear implantation: Group II showed more cases of vertigo and dysgeusia. The number of auditory therapy sessions was similar in both groups. Conclusion: Cochlear implantation in the elderly is highly effective; postoperative hearing performance is at the same level as that of younger adult recipients. Complex hearing tasks, such as hearing in background noise, require an equally long time for comprehension. The recovery period for vestibular dysfunction after surgery may be longer in the elderly. Auditory therapy rehabilitation is not more time consuming in the elderly than in their younger counterparts.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePS58y
via IFTTT

Comparison of FGF-2, FLOX, and Gelfoam Patching for Traumatic Tympanic Membrane Perforation.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek6V4e
via IFTTT

Traumatic Tympanic Membrane Perforation Repair Using Gelfoam, Ofloxacin Drops, and FGF-2.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePS9F5
via IFTTT

The Stapes Bar: An Unusual Cause of Conductive Hearing Loss With Normal Tympanic Membrane.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek9Zxt
via IFTTT