OtoRhinoLaryngology by Sfakianakis G.Alexandros Sfakianakis G.Alexandros,Anapafseos 5 Agios Nikolaos 72100 Crete Greece,tel : 00302841026182,00306932607174
Wednesday, 19 October 2016
Spatial and temporal disparity in signals and maskers affects signal detection in non-human primates
Source:Hearing Research
Author(s): Francesca Rocchi, Margit E. Dylla, Peter A. Bohlen, Ramnarayan Ramachandran
Detection thresholds for auditory stimuli (signals) increase in the presence of maskers. Natural environments contain maskers/distractors that can have a wide range of spatiotemporal properties relative to the signal. While these parameters have been well explored psychophysically in humans, they have not been well explored in animal models, and their neuronal underpinnings are not well understood. As a precursor to the neuronal measurements, we report the effects of systematically varying the spatial and temporal relationship between signals and noise in macaque monkeys (Macaca mulatta and Macaca radiata). Macaques detected tones masked by noise in a Go/No-Go task in which the spatiotemporal relationships between the tone and noise were systematically varied. Masked thresholds were higher when the masker was continuous or gated on and off simultaneously with the signal, and lower when the continuous masker was turned off during the signal. A burst of noise caused higher masked thresholds if it completely temporally overlapped with the signal, whereas partial overlap resulted in lower thresholds. Noise durations needed to be at least 100 ms before significant masking could be observed. Thresholds for short-duration tones were significantly higher when the onsets of signal and masker coincided than when the signal was presented during the steady-state portion of the noise (overshoot). When signal and masker were separated in space, masked signal detection thresholds decreased relative to when the masker and signal were co-located (spatial release from masking). Masking release was larger for azimuthal separations than for elevation separations. These results in macaques are similar to those observed in humans, suggesting that the specific spatiotemporal relationship between signal and masker determines thresholds in natural environments for macaques in a manner similar to humans. These results form the basis for future investigations of neuronal correlates and mechanisms of masking.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eTJGRi
via IFTTT
The influence of memory and attention on the ear advantage in dichotic listening
Source:Hearing Research
Author(s): Anita D’Anselmo, Daniele Marzoli, Alfredo Brancucci
The role of memory retention and attentional control on hemispheric asymmetry was investigated using a verbal dichotic listening paradigm with the consonant–vowel syllables (/ba/, /da/, /ga/, /ka/, /pa/ and /ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in the FL condition. The REA increased significantly in the NF attention condition at the 3-s compared to the 0-s delay, and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2enET7R
via IFTTT
Performance in Noise: Impact of Reduced Speech Intelligibility on Sailor Performance in a Navy Command and Control Environment
Source:Hearing Research
Author(s): M. David Keller, John M. Ziriax, William Barns, Benjamin Sheffield, Douglas Brungart, Tony Thomas, Bobby Jaeger, Kurt Yankaskas
Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eTIQ6W
via IFTTT
Category Selectivity of the N170 and the Role of Expertise in Deaf Signers
Source:Hearing Research
Author(s): Teresa V. Mitchell
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms, including upright faces, inverted faces, and hands, were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry, while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggest that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2enDdv8
via IFTTT
Editorial Introduction: Special Issue on Plasticity Following Hearing Loss and Deafness
Source:Hearing Research
Author(s): Blake E. Butler, M. Alex Meredith, Stephen G. Lomber
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eTGQMl
via IFTTT
Musicians' edge: a comparison of auditory processing, cognitive abilities and statistical learning
Source:Hearing Research
Author(s): Pragati Rao Mandikal Vasuki, Mridula Sharma, Katherine Demuth, Joanne Arciuli
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians’ advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N=17) and non-musicians (N=18). Musicians performed significantly better than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians’ superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2em0123
via IFTTT
Spatial and temporal disparity in signals and maskers affects signal detection in non-human primates
Source:Hearing Research
Author(s): Francesca Rocchi, Margit E. Dylla, Peter A. Bohlen, Ramnarayan Ramachandran
Detection thresholds for auditory stimuli (signals) increase in the presence of maskers. Natural environments contain maskers/distractors that can have a wide range of spatiotemporal properties relative to the signal. While these parameters have been well explored psychophysically in humans, they have not been well explored in animal models, and their neuronal underpinnings are not well understood. As a precursor to the neuronal measurements, we report the effects of systematically varying the spatial and temporal relationship between signals and noise in macaque monkeys (Macaca mulatta and Macaca radiata). Macaques detected tones masked by noise in a Go/No-Go task in which the spatiotemporal relationships between the tone and noise were systematically varied. Masked thresholds were higher when the masker was continuous or gated on and off simultaneously with the signal, and lower when the continuous masker was turned off during the signal. A burst of noise caused higher masked thresholds if it completely temporally overlapped with the signal, whereas partial overlap resulted in lower thresholds. Noise durations needed to be at least 100 ms before significant masking could be observed. Thresholds for short duration tones were significantly higher when the onsets of signal and masker coincided compared to when the signal was presented during the steady state portion of the noise (overshoot). When signal and masker were separated in space, masked signal detection thresholds decreased relative to when the masker and signal were co-located (spatial release from masking). Masking release was larger for azimuthal separations than for elevation separations. These results in macaques are similar to those observed in humans, suggesting that the specific spatiotemporal relationship between signal and masker determine threshold in natural environments for macaques in a manner similar to humans. 
These results form the basis for future investigations of neuronal correlates and mechanisms of masking.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTJGRi
via IFTTT
The influence of memory and attention on the ear advantage in dichotic listening
Source:Hearing Research
Author(s): Anita D’Anselmo, Daniele Marzoli, Alfredo Brancucci
The role of memory retention and attentional control on hemispheric asymmetry was investigated using a verbal dichotic listening paradigm, with the consonant–vowel syllables (/ba/,/da/,/ga/,/ka/,/pa/and/ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in FL condition. The REA increased significantly in the NF attention condition at the 3-s compared to the 0-s delay and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enET7R
via IFTTT
Performance in Noise: Impact of Reduced Speech Intelligibility on Sailor Performance in a Navy Command and Control Environment
Source:Hearing Research
Author(s): M. David Keller, John M. Ziriax, William Barns, Benjamin Sheffield, Douglas Brungart, Tony Thomas, Bobby Jaeger, Kurt Yankaskas
Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic Naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTIQ6W
via IFTTT
Category Selectivity of the N170 and the Role of Expertise in Deaf Signers
Source:Hearing Research
Author(s): Teresa V. Mitchell
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggests that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enDdv8
via IFTTT
Editorial Introduction: Special Issue on Plasticity Following Hearing Loss and Deafness
Source:Hearing Research
Author(s): Blake E. Butler, M. Alex Meredith, Stephen G. Lomber
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTGQMl
via IFTTT
Musicians' edge: a comparison of auditory processing, cognitive abilities and statistical learning
Source:Hearing Research
Author(s): Pragati Rao Mandikal Vasuki, Mridula Sharma, Katherine Demuth, Joanne Arciuli
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians’ advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N=17) and non-musicians (N=18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians’ superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli.
from #Audiology via ola Kala on Inoreader http://ift.tt/2em0123
via IFTTT
Spatial and temporal disparity in signals and maskers affects signal detection in non-human primates
Source:Hearing Research
Author(s): Francesca Rocchi, Margit E. Dylla, Peter A. Bohlen, Ramnarayan Ramachandran
Detection thresholds for auditory stimuli (signals) increase in the presence of maskers. Natural environments contain maskers/distractors that can have a wide range of spatiotemporal properties relative to the signal. While these parameters have been well explored psychophysically in humans, they have not been well explored in animal models, and their neuronal underpinnings are not well understood. As a precursor to the neuronal measurements, we report the effects of systematically varying the spatial and temporal relationship between signals and noise in macaque monkeys (Macaca mulatta and Macaca radiata). Macaques detected tones masked by noise in a Go/No-Go task in which the spatiotemporal relationships between the tone and noise were systematically varied. Masked thresholds were higher when the masker was continuous or gated on and off simultaneously with the signal, and lower when the continuous masker was turned off during the signal. A burst of noise caused higher masked thresholds if it completely temporally overlapped with the signal, whereas partial overlap resulted in lower thresholds. Noise durations needed to be at least 100 ms before significant masking could be observed. Thresholds for short duration tones were significantly higher when the onsets of signal and masker coincided compared to when the signal was presented during the steady state portion of the noise (overshoot). When signal and masker were separated in space, masked signal detection thresholds decreased relative to when the masker and signal were co-located (spatial release from masking). Masking release was larger for azimuthal separations than for elevation separations. These results in macaques are similar to those observed in humans, suggesting that the specific spatiotemporal relationship between signal and masker determine threshold in natural environments for macaques in a manner similar to humans. 
These results form the basis for future investigations of neuronal correlates and mechanisms of masking.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTJGRi
via IFTTT
The influence of memory and attention on the ear advantage in dichotic listening
Source:Hearing Research
Author(s): Anita D’Anselmo, Daniele Marzoli, Alfredo Brancucci
The role of memory retention and attentional control on hemispheric asymmetry was investigated using a verbal dichotic listening paradigm, with the consonant–vowel syllables (/ba/,/da/,/ga/,/ka/,/pa/and/ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in FL condition. The REA increased significantly in the NF attention condition at the 3-s compared to the 0-s delay and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enET7R
via IFTTT
Performance in Noise: Impact of Reduced Speech Intelligibility on Sailor Performance in a Navy Command and Control Environment
Source:Hearing Research
Author(s): M. David Keller, John M. Ziriax, William Barns, Benjamin Sheffield, Douglas Brungart, Tony Thomas, Bobby Jaeger, Kurt Yankaskas
Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic Naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTIQ6W
via IFTTT
Category Selectivity of the N170 and the Role of Expertise in Deaf Signers
Source:Hearing Research
Author(s): Teresa V. Mitchell
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggests that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enDdv8
via IFTTT
Editorial Introduction: Special Issue on Plasticity Following Hearing Loss and Deafness
Source:Hearing Research
Author(s): Blake E. Butler, M. Alex Meredith, Stephen G. Lomber
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTGQMl
via IFTTT
Musicians' edge: a comparison of auditory processing, cognitive abilities and statistical learning
Source:Hearing Research
Author(s): Pragati Rao Mandikal Vasuki, Mridula Sharma, Katherine Demuth, Joanne Arciuli
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians’ advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N=17) and non-musicians (N=18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians’ superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli.
from #Audiology via ola Kala on Inoreader http://ift.tt/2em0123
via IFTTT
Spatial and temporal disparity in signals and maskers affects signal detection in non-human primates
Source:Hearing Research
Author(s): Francesca Rocchi, Margit E. Dylla, Peter A. Bohlen, Ramnarayan Ramachandran
Detection thresholds for auditory stimuli (signals) increase in the presence of maskers. Natural environments contain maskers/distractors that can have a wide range of spatiotemporal properties relative to the signal. While these parameters have been well explored psychophysically in humans, they have not been well explored in animal models, and their neuronal underpinnings are not well understood. As a precursor to the neuronal measurements, we report the effects of systematically varying the spatial and temporal relationship between signals and noise in macaque monkeys (Macaca mulatta and Macaca radiata). Macaques detected tones masked by noise in a Go/No-Go task in which the spatiotemporal relationships between the tone and noise were systematically varied. Masked thresholds were higher when the masker was continuous or gated on and off simultaneously with the signal, and lower when the continuous masker was turned off during the signal. A burst of noise caused higher masked thresholds if it completely temporally overlapped with the signal, whereas partial overlap resulted in lower thresholds. Noise durations needed to be at least 100 ms before significant masking could be observed. Thresholds for short duration tones were significantly higher when the onsets of signal and masker coincided compared to when the signal was presented during the steady state portion of the noise (overshoot). When signal and masker were separated in space, masked signal detection thresholds decreased relative to when the masker and signal were co-located (spatial release from masking). Masking release was larger for azimuthal separations than for elevation separations. These results in macaques are similar to those observed in humans, suggesting that the specific spatiotemporal relationship between signal and masker determine threshold in natural environments for macaques in a manner similar to humans. 
These results form the basis for future investigations of neuronal correlates and mechanisms of masking.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTJGRi
via IFTTT
The influence of memory and attention on the ear advantage in dichotic listening
Source:Hearing Research
Author(s): Anita D’Anselmo, Daniele Marzoli, Alfredo Brancucci
The role of memory retention and attentional control on hemispheric asymmetry was investigated using a verbal dichotic listening paradigm, with the consonant–vowel syllables (/ba/,/da/,/ga/,/ka/,/pa/and/ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in FL condition. The REA increased significantly in the NF attention condition at the 3-s compared to the 0-s delay and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enET7R
via IFTTT
Performance in Noise: Impact of Reduced Speech Intelligibility on Sailor Performance in a Navy Command and Control Environment
Source:Hearing Research
Author(s): M. David Keller, John M. Ziriax, William Barns, Benjamin Sheffield, Douglas Brungart, Tony Thomas, Bobby Jaeger, Kurt Yankaskas
Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic Naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTIQ6W
via IFTTT
Category Selectivity of the N170 and the Role of Expertise in Deaf Signers
Source:Hearing Research
Author(s): Teresa V. Mitchell
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggests that this effect is due to lifelong use of American Sign Language rather than auditory deprivation itself.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enDdv8
via IFTTT
Editorial Introduction: Special Issue on Plasticity Following Hearing Loss and Deafness
Source:Hearing Research
Author(s): Blake E. Butler, M. Alex Meredith, Stephen G. Lomber
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTGQMl
via IFTTT
Musicians' edge: a comparison of auditory processing, cognitive abilities and statistical learning
Source:Hearing Research
Author(s): Pragati Rao Mandikal Vasuki, Mridula Sharma, Katherine Demuth, Joanne Arciuli
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians’ advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N=17) and non-musicians (N=18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians’ superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli.
from #Audiology via ola Kala on Inoreader http://ift.tt/2em0123
via IFTTT
Spatial and temporal disparity in signals and maskers affects signal detection in non-human primates
Source:Hearing Research
Author(s): Francesca Rocchi, Margit E. Dylla, Peter A. Bohlen, Ramnarayan Ramachandran
Detection thresholds for auditory stimuli (signals) increase in the presence of maskers. Natural environments contain maskers/distractors that can have a wide range of spatiotemporal properties relative to the signal. While these parameters have been well explored psychophysically in humans, they have not been well explored in animal models, and their neuronal underpinnings are not well understood. As a precursor to neuronal measurements, we report the effects of systematically varying the spatial and temporal relationship between signals and noise in macaque monkeys (Macaca mulatta and Macaca radiata). Macaques detected tones masked by noise in a Go/No-Go task in which the spatiotemporal relationships between the tone and noise were systematically varied. Masked thresholds were higher when the masker was continuous or gated on and off simultaneously with the signal, and lower when the continuous masker was turned off during the signal. A burst of noise caused higher masked thresholds if it completely overlapped the signal in time, whereas partial overlap resulted in lower thresholds. Noise durations needed to be at least 100 ms before significant masking could be observed. Thresholds for short-duration tones were significantly higher when the onsets of signal and masker coincided than when the signal was presented during the steady-state portion of the noise (overshoot). When signal and masker were separated in space, masked signal detection thresholds decreased relative to when they were co-located (spatial release from masking). Masking release was larger for azimuthal separations than for separations in elevation. These results in macaques are similar to those observed in humans, suggesting that the specific spatiotemporal relationship between signal and masker determines thresholds in natural environments for macaques in a manner similar to humans.
These results form the basis for future investigations of neuronal correlates and mechanisms of masking.
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTJGRi
via IFTTT
The influence of memory and attention on the ear advantage in dichotic listening
Source:Hearing Research
Author(s): Anita D’Anselmo, Daniele Marzoli, Alfredo Brancucci
The role of memory retention and attentional control in hemispheric asymmetry was investigated using a verbal dichotic listening paradigm with consonant–vowel syllables (/ba/, /da/, /ga/, /ka/, /pa/, and /ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL), and forced right (FR). Memory involvement was varied using four delays (0, 1, 3, and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition; a left ear advantage (LEA) was found in the FL condition. The REA increased significantly in the NF condition at the 3-s delay compared to the 0-s delay, and in the FR condition at the 1-s delay compared to the 0-s delay. No modulation of the LEA was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention.
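The abstract reports ear advantages only qualitatively; dichotic-listening studies commonly summarize them with a laterality index such as (R − L)/(R + L) × 100, where R and L are the correct reports from the right and left ear. The sketch below is illustrative only, since the abstract does not state which measure was used, and the counts are hypothetical.

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """(R - L) / (R + L) * 100; positive values indicate a right ear
    advantage (REA), negative values a left ear advantage (LEA)."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical counts: 40 right-ear vs. 25 left-ear correct reports
print(round(laterality_index(40, 25), 1))  # → 23.1
```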
from #Audiology via ola Kala on Inoreader http://ift.tt/2enET7R
via IFTTT
Performance in Noise: Impact of Reduced Speech Intelligibility on Sailor Performance in a Navy Command and Control Environment
Source:Hearing Research
Author(s): M. David Keller, John M. Ziriax, William Barns, Benjamin Sheffield, Douglas Brungart, Tony Thomas, Bobby Jaeger, Kurt Yankaskas
Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic Naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.
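The abstract does not describe how the adaptive system converged on each intelligibility level; one common approach in psychoacoustics is a weighted up-down (Kaernbach-style) rule that drives the SNR toward a target proportion of correct responses. The function below is a hypothetical sketch of that general technique, not the authors' implementation.

```python
def next_snr(snr_db: float, correct: bool,
             target: float = 0.60, step_db: float = 1.0) -> float:
    """Weighted up-down rule: step down after a correct response and
    up after an error, with step sizes chosen so the track converges
    where P(correct) equals the target proportion."""
    if not 0.0 < target < 1.0:
        raise ValueError("target must be strictly between 0 and 1")
    if correct:
        return snr_db - step_db * (1.0 - target) / target
    return snr_db + step_db

# Tracking toward 60% correct: a correct response lowers the SNR by a
# smaller step than an error raises it.
print(round(next_snr(0.0, True), 3))  # → -0.667
print(next_snr(0.0, False))           # → 1.0
```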
from #Audiology via ola Kala on Inoreader http://ift.tt/2eTIQ6W
via IFTTT
Category Selectivity of the N170 and the Role of Expertise in Deaf Signers
Source:Hearing Research
Author(s): Teresa V. Mitchell
Deafness is known to affect processing of visual motion and information in the visual periphery, as well as the neural substrates for these domains. This study was designed to characterize the effects of early deafness and lifelong sign language use on visual category sensitivity of the N170 event-related potential. Images from nine categories of visual forms including upright faces, inverted faces, and hands were presented to twelve typically hearing adults and twelve adult congenitally deaf signers. Classic N170 category sensitivity was observed in both participant groups, whereby faces elicited larger amplitudes than all other visual categories, and inverted faces elicited larger amplitudes and slower latencies than upright faces. In hearing adults, hands elicited a right hemispheric asymmetry, while in deaf signers this category elicited a left hemispheric asymmetry. Pilot data from five hearing native signers suggest that this effect is due to lifelong use of American Sign Language rather than to auditory deprivation itself.
from #Audiology via ola Kala on Inoreader http://ift.tt/2enDdv8
via IFTTT
Hooray for Irina!
Fourth-year JDP Language and Communicative Disorders student Irina Potapova presented her research at the UCSD Frontiers of Innovation Scholars Program (FISP) symposium at UC San Diego on October 18th. The FISP symposium is a celebration of awards made for undergraduate, graduate, and postdoctoral research that is interdisciplinary in nature and involves mentors from at least two divisions at UC San Diego. Ms. Potapova was awarded a graduate fellowship to work with Leanne Chukoskie and Jeanne Townsend, using eye tracking as a sensitive online assessment of novel word learning in young children with and without language disorders.
from #Audiology via ola Kala on Inoreader http://ift.tt/2emUpkw
via IFTTT
Aftereffects of Intense Low-Frequency Sound on Spontaneous Otoacoustic Emissions: Effect of Frequency and Level
Abstract
The presentation of intense, low-frequency (LF) sound to the human ear can cause very slow, sinusoidal oscillations of cochlear sensitivity after LF sound offset, coined the “Bounce” phenomenon. Changes in the level and frequency of spontaneous otoacoustic emissions (SOAEs) are a sensitive measure of the Bounce. Here, we investigated the effect of LF sound level and frequency on the Bounce. Specifically, the level of SOAEs was tracked for minutes before and after a 90-s LF sound exposure. Trials were carried out at several LF sound levels (93 to 108 dB SPL, corresponding to 47 to 75 phons, at a fixed frequency of 30 Hz) and at different LF sound frequencies (30, 60, 120, 240, and 480 Hz at a fixed loudness level of 80 phons). At an LF sound frequency of 30 Hz, a minimal sound level of 102 dB SPL (64 phons) was sufficient to elicit a significant Bounce. In some subjects, however, 93 dB SPL (47 phons), the lowest level used, was sufficient to elicit the Bounce, and actual thresholds could have been even lower. Measurements with different LF sound frequencies showed a mild reduction of the Bounce with increasing LF sound frequency. This indicates that the strength of the Bounce is not simply a function of the spectral separation between the SOAE and LF sound frequencies but also depends on the absolute LF sound frequency, possibly related to the magnitude of the AC component of the outer hair cell receptor potential.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2emLHTp
via IFTTT
Application of Mouse Models to Research in Hearing and Balance.
J Assoc Res Otolaryngol. 2016 Oct 17;
Authors: Ohlemiller KK, Jones SM, Johnson KR
Abstract
Laboratory mice (Mus musculus) have become the major model species for inner ear research. The major uses of mice include gene discovery, characterization, and confirmation. Every application of mice is founded on assumptions about what mice represent and how the information gained may be generalized. A host of successes support the continued use of mice to understand hearing and balance. Depending on the research question, however, some mouse models and research designs will be more appropriate than others. Here, we recount some of the history and successes of the use of mice in hearing and vestibular studies and offer guidelines to those considering how to apply mouse models.
PMID: 27752925 [PubMed - as supplied by publisher]
from #Audiology via ola Kala on Inoreader http://ift.tt/2e8AYzP
via IFTTT
Is it Beneficial for Deaf Children to Learn Sign Language?
A researcher at the University of Connecticut, Marie Coppola, recently received a National Science Foundation grant "to study the impact of early language experiences—whether spoken or signed—on how children learn." She hypothesizes that the difference in success is not a matter of whether the language is spoken or signed but rather if the access to any language is early or late.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2eiU82S
via IFTTT
Mandibulofacial Dysostosis with Microcephaly: Mutation and Database Update.
Hum Mutat. 2016 Feb;37(2):148-54
Authors: Huang L, Vanstone MR, Hartley T, Osmond M, Barrowman N, Allanson J, Baker L, Dabir TA, Dipple KM, Dobyns WB, Estrella J, Faghfoury H, Favaro FP, Goel H, Gregersen PA, Gripp KW, Grix A, Guion-Almeida ML, Harr MH, Hudson C, Hunter AG, Johnson J, Joss SK, Kimball A, Kini U, Kline AD, Lauzon J, Lildballe DL, López-González V, Martinezmoles J, Meldrum C, Mirzaa GM, Morel CF, Morton JE, Pyle LC, Quintero-Rivera F, Richer J, Scheuerle AE, Schönewolf-Greulich B, Shears DJ, Silver J, Smith AC, Temple IK, UCLA Clinical Genomics Center, van de Kamp JM, van Dijk FS, Vandersteen AM, White SM, Zackai EH, Zou R, Care4Rare Canada Consortium, Bulman DE, Boycott KM, Lines MA
Abstract
Mandibulofacial dysostosis with microcephaly (MFDM) is a multiple malformation syndrome comprising microcephaly, craniofacial anomalies, hearing loss, dysmorphic features, and, in some cases, esophageal atresia. Haploinsufficiency of a spliceosomal GTPase, U5-116 kDa/EFTUD2, is responsible. Here, we review the molecular basis of MFDM in the 69 individuals described to date and report mutations in 38 new individuals, bringing the total number of reported individuals to 107, from 94 kindreds. Pathogenic EFTUD2 variants comprise 76 distinct mutations and seven microdeletions. Among point mutations, missense substitutions are infrequent (14 out of 76; 18%) relative to stop-gain (29 out of 76; 38%) and splicing (33 out of 76; 43%) mutations. Where known, mutation origin was de novo in 48 out of 64 individuals (75%), dominantly inherited in 12 out of 64 (19%), and due to proven germline mosaicism in four out of 64 (6%). Highly penetrant clinical features include microcephaly, first- and second-arch craniofacial malformations, and hearing loss; esophageal atresia is present in an estimated ∼27%. Microcephaly is virtually universal in childhood, with some adults exhibiting late "catch-up" growth and normocephaly at maturity. Occasionally reported anomalies include vestibular and ossicular malformations, reduced mouth opening, atrophy of cerebral white matter, structural brain malformations, and epibulbar dermoid. All reported EFTUD2 mutations can be found in the EFTUD2 mutation database (http://ift.tt/2dnLwtj).
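As a quick consistency check, the point-mutation percentages quoted in the abstract follow directly from the stated counts (14 missense, 29 stop-gain, and 33 splicing mutations out of 76 distinct point mutations):

```python
# Counts of EFTUD2 point mutations by type, as stated in the abstract
counts = {"missense": 14, "stop-gain": 29, "splicing": 33}
total = sum(counts.values())
assert total == 76  # matches the reported number of distinct point mutations

# Percentages rounded to whole numbers, as quoted (18%, 38%, 43%)
percentages = {k: round(100 * v / total) for k, v in counts.items()}
print(percentages)  # {'missense': 18, 'stop-gain': 38, 'splicing': 43}
```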
PMID: 26507355 [PubMed - indexed for MEDLINE]
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2e1Dd4h
via IFTTT
Hearing Aid Batteries: The Past, Present, and Future
from #Audiology via ola Kala on Inoreader http://ift.tt/2e0qrD8
via IFTTT
Monitoring Progression of 12 Cases of Non-Operated Middle Ear Cholesteatoma With Non-Echoplanar Diffusion Weighted Magnetic Resonance Imaging: Our Experience.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePTjAp
via IFTTT
Evaluation of Rigid Cochlear Models for Measuring Cochlear Implant Electrode Position.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek6qHq
via IFTTT
Diagnostic Criteria for Detection of Vestibular Schwannomas in the VA Population.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePQG1D
via IFTTT
Posterior Fossa Spontaneous Cerebrospinal Fluid Leaks.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ek8oaQ
via IFTTT
Cochlear Implantation in the Elderly: Does Age Matter?.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ePS58y
via IFTTT