Thursday, 10 May 2018

Increased Spontaneous Firing Rates in Auditory Midbrain Following Noise Exposure Are Specifically Abolished by a Kv3 Channel Modulator

Publication date: Available online 30 April 2018
Source: Hearing Research
Author(s): Lucy A. Anderson, Lara L. Hesse, Nadia Pilati, Warren M.H. Bakay, Giuseppe Alvaro, Charles H. Large, David McAlpine, Roland Schaette, Jennifer F. Linden
Noise exposure has been shown to produce long-lasting increases in spontaneous activity in central auditory structures in animal models, and similar pathologies are thought to contribute to clinical phenomena such as hyperacusis or tinnitus in humans. Here we demonstrate that multi-unit spontaneous neuronal activity in the inferior colliculus (IC) of mice is significantly elevated four weeks following noise exposure at recording sites with frequency tuning within or near the noise exposure band, and this selective central auditory pathology can be normalised through administration of a novel compound that modulates activity of Kv3 voltage-gated ion channels. The compound had no statistically significant effect on IC spontaneous activity without noise exposure, nor on thresholds or frequency tuning of tone-evoked responses either with or without noise exposure. Administration of the compound produced some reduction in the magnitude of evoked responses to a broadband noise, but unlike effects on spontaneous rates, these effects on evoked responses were not specific to recording sites with frequency tuning within the noise exposure band. Thus, the results suggest that modulators of Kv3 channels can selectively counteract increases in spontaneous activity in the auditory midbrain associated with noise exposure.
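The core claim is a frequency-specific rate change: spontaneous firing is elevated only at sites whose tuning falls within or near the exposure band. A minimal Python sketch of that kind of comparison follows; the exposure band, characteristic frequencies, and firing rates are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: compare multi-unit spontaneous rates (spikes/s)
# between IC sites tuned within/near the exposure band and sites tuned outside
# it. All values below are made-up examples, not values from the study.
EXPOSURE_BAND = (8.0, 16.0)   # assumed noise-exposure band, kHz
EDGE_MARGIN = 0.5             # octaves from the band edge still counted as "near"

def octaves_from_band(cf, band):
    """Distance (in octaves) of a characteristic frequency from the band."""
    lo, hi = band
    if lo <= cf <= hi:
        return 0.0
    return np.log2(cf / hi) if cf > hi else np.log2(lo / cf)

cf_khz = np.array([2.0, 4.0, 9.5, 12.1, 14.8, 25.0, 32.0])
spont_rate = np.array([3.1, 3.4, 9.8, 11.2, 10.5, 4.2, 3.7])

near = np.array([octaves_from_band(cf, EXPOSURE_BAND) <= EDGE_MARGIN
                 for cf in cf_khz])
t, p = stats.ttest_ind(spont_rate[near], spont_rate[~near])
print(f"within/near band: {spont_rate[near].mean():.1f} sp/s, "
      f"outside: {spont_rate[~near].mean():.1f} sp/s (t={t:.2f}, p={p:.3f})")
```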



from #Audiology via ola Kala on Inoreader https://ift.tt/2I3Prux
via IFTTT

A “voice patch” system in the primate brain for processing vocal information?

Publication date: Available online 7 May 2018
Source: Hearing Research
Author(s): Pascal Belin, Clémentine Bodin, Virginia Aglieri
We review behavioural and neural evidence for the processing of information contained in conspecific vocalizations (CVs) in three primate species: humans, macaques and marmosets. We focus on abilities that are present and ecologically relevant in all three species: the detection of and sensitivity to CVs, and the processing of identity cues in CVs. Current evidence, although fragmentary, supports the notion of a “voice patch system” in the primate brain analogous to the face patch system of visual cortex: a series of discrete, interconnected cortical areas supporting increasingly abstract representations of the vocal input. A central question concerns the degree to which the voice patch system is conserved in evolution. We outline challenges that arise and suggest potential avenues for comparing the organization of the voice patch system across primate brains.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic8aQM
via IFTTT

Cortical processing of location changes in a “cocktail-party” situation: Spatial oddball effects on electrophysiological correlates of auditory selective attention

Publication date: Available online 27 April 2018
Source: Hearing Research
Author(s): Jörg Lewald, Michael-Christian Schlüter, Stephan Getzmann
Neural mechanisms of selectively attending to a sound source of interest in a simulated “cocktail-party” situation, composed of multiple competing sources, were investigated using event-related potentials in combination with a spatial oddball design. Subjects either detected rare spatial deviants in a series of standard sounds or passively listened. Targets either appeared in isolation or in the presence of two distractor sound sources at different locations (“cocktail-party” condition). Deviant-minus-standard difference potentials revealed mismatch negativity, P3a, and P3b. However, mainly the P3b was modulated by the spatial conditions of stimulation, with lower amplitude for “cocktail-party” than for single sounds. In the active condition, cortical source localization revealed two distinct foci of maximum differences in electrical activity for the contrast of single vs. “cocktail-party” sounds: the right inferior frontal junction and the right anterior superior parietal lobule. These areas may be specifically involved in processes associated with selective attention in a “cocktail-party” situation.
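The deviant-minus-standard difference potential mentioned above is a simple trial-averaging subtraction. Here is a minimal sketch assuming baseline-corrected epoch arrays; the sampling rate, trial counts, and analysis window are hypothetical, not the study's parameters.

```python
import numpy as np

# Sketch of a deviant-minus-standard difference wave, as used to reveal
# MMN/P3a/P3b. Epoch arrays are (n_trials x n_channels x n_samples) and are
# random placeholders here.
fs = 500                                    # assumed sampling rate, Hz
epochs_std = np.random.randn(200, 64, 450)  # placeholder standard epochs
epochs_dev = np.random.randn(40, 64, 450)   # placeholder deviant epochs

erp_std = epochs_std.mean(axis=0)           # average over trials
erp_dev = epochs_dev.mean(axis=0)
difference_wave = erp_dev - erp_std         # deviant minus standard

# Mean amplitude in a putative P3b window (e.g., 300-600 ms post-stimulus)
win = slice(int(0.3 * fs), int(0.6 * fs))
p3b_amplitude = difference_wave[:, win].mean(axis=1)  # one value per channel
print(p3b_amplitude.shape)  # (64,)
```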


from #Audiology via ola Kala on Inoreader https://ift.tt/2I6iZYv
via IFTTT

Impact of SNR, masker type and noise reduction processing on sentence recognition performance and listening effort as indicated by the pupil dilation response

Publication date: Available online 6 May 2018
Source: Hearing Research
Author(s): Barbara Ohlenforst, Dorothea Wendt, Sophia E. Kramer, Graham Naylor, Adriana A. Zekveld, Thomas Lunner
Recent studies have shown that activating the noise reduction scheme in hearing aids results in a smaller peak pupil dilation (PPD), indicating reduced listening effort, at 50% and 95% correct sentence recognition with a 4-talker masker. The objective of this study was to measure the effect of the noise reduction scheme (on or off) on PPD and sentence recognition across a wide range of signal-to-noise ratios (SNRs) from +16 dB to -12 dB and two masker types (4-talker and stationary noise). Relatively low PPDs were observed at very low (-12 dB) and very high (+16 dB to +8 dB) SNRs, presumably due to ‘giving up’ and ‘easy listening’, respectively. The maximum PPD was observed at SNRs corresponding to approximately 50% correct sentence recognition. Sentence recognition with both masker types was significantly improved by the noise reduction scheme, corresponding to a shift of the performance-versus-SNR function by approximately 5 dB toward a lower SNR. This intelligibility effect was accompanied by a corresponding effect on the PPD, shifting the peak by approximately 4 dB toward a lower SNR. In addition, with the 4-talker masker, the PPD was smaller overall when the noise reduction scheme was active than when it was inactive. We conclude that with the 4-talker masker, noise reduction processing provides a listening effort benefit in addition to any effect associated with improved intelligibility. Thus, the effect of the noise reduction scheme on listening effort incorporates more than can be explained by intelligibility alone, emphasizing the potential importance of measuring listening effort in addition to traditional speech reception measures.
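For intuition, the reported ~4 dB peak shift can be read directly off a PPD-versus-SNR curve. A short sketch follows, with entirely made-up pupil values chosen only to reproduce a shift of that size; none of these numbers come from the study.

```python
import numpy as np

# Illustrative sketch: locate the SNR of maximum peak pupil dilation (PPD) and
# estimate the horizontal shift of the PPD-vs-SNR curve with noise reduction
# on vs. off. All values are hypothetical placeholders.
snrs = np.arange(-12, 17, 4)   # dB: -12, -8, -4, 0, 4, 8, 12, 16
ppd_off = np.array([0.10, 0.18, 0.26, 0.30, 0.24, 0.16, 0.11, 0.09])  # mm
ppd_on  = np.array([0.11, 0.22, 0.29, 0.27, 0.20, 0.14, 0.10, 0.08])  # mm

snr_peak_off = snrs[np.argmax(ppd_off)]
snr_peak_on = snrs[np.argmax(ppd_on)]
shift_db = snr_peak_on - snr_peak_off   # negative = peak moved to lower SNR
print(f"peak PPD at {snr_peak_off} dB (off) vs {snr_peak_on} dB (on); "
      f"shift = {shift_db} dB")
```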



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic83Vm
via IFTTT

Characterizing a novel vGlut3-P2A-iCreER knockin mouse strain in cochlea

Publication date: Available online 17 April 2018
Source: Hearing Research
Author(s): Chao Li, Yilai Shu, Guangqin Wang, He Zhang, Ying Lu, Xiang Li, Gen Li, Lei Song, Zhiyong Liu
Precise mouse genetic studies rely on tools that can label specific cell types. In the mouse cochlea, previous studies suggest that vesicular glutamate transporter 3 (vGlut3), also known as Slc17a8, is specifically expressed in inner hair cells (IHCs) and that loss of vGlut3 causes deafness. To take advantage of this unique expression pattern, we generated a novel vGlut3-P2A-iCreER knockin mouse strain. The P2A-iCreER cassette is precisely inserted before the stop codon of vGlut3, leaving endogenous vGlut3 intact while pairing it with iCreER. Approximately 10.7%, 85.6% and 41.8% of IHCs are tdTomato+ when tamoxifen is given to the vGlut3-P2A-iCreER/+; Rosa26-LSL-tdtomato/+ reporter strain at P2/P3, P10/P11 and P30/P31, respectively. TdTomato+ OHCs are never observed. Interestingly, besides IHCs, glial cells, but not spiral ganglion neurons (SGNs), are tdTomato+, which is further evidenced by the presence of Sox10+/tdTomato+ and tdTomato+/Prox1 (Gata3 or Tuj1)-negative cells in the SGN region. We further independently validated vGlut3 expression in the SGN region by vGlut3 in situ hybridization and antibody staining. Moreover, the total number of tdTomato+ glial cells decreases gradually when tamoxifen is given from P2/P3 to P30/P31. Taken together, vGlut3-P2A-iCreER is an efficient genetic tool for specifically targeting IHCs for gene manipulation, complementary to the Prestin-CreER strain that exclusively labels cochlear outer hair cells (OHCs).



from #Audiology via ola Kala on Inoreader https://ift.tt/2jOnWqw
via IFTTT

How aging impacts the encoding of binaural cues and the perception of auditory space

Publication date: Available online 5 May 2018
Source: Hearing Research
Author(s): Ann Clock Eddins, Erol J. Ozmeral, David A. Eddins
Over the years, the effect of aging on auditory function has been investigated in animal models and humans in an effort to characterize age-related changes in both perception and physiology. Here, we review how aging may impact neural encoding and processing of binaural and spatial cues in human listeners with a focus on recent work by the authors as well as others. Age-related declines in monaural temporal processing, as estimated from measures of gap detection and temporal fine structure discrimination, have been associated with poorer performance on binaural tasks that require precise temporal processing. In lateralization and localization tasks, as well as in the detection of signals in noise, marked age-related changes have been demonstrated in both behavioral and electrophysiological measures and have been attributed to declines in neural synchrony and reduced central inhibition with advancing age. Evidence for such mechanisms, however, is influenced by the task (passive vs. attending) and the stimulus paradigm (e.g., static vs. continuous with dynamic change). That is, cortical auditory evoked potentials (CAEP) measured in response to static interaural time differences (ITDs) are larger in older versus younger listeners, consistent with reduced inhibition, while continuous stimuli with dynamic ITD changes lead to smaller responses in older compared to younger adults, suggestive of poorer neural synchrony. Additionally, the distribution of cortical activity is broader and less asymmetric in older than younger adults, consistent with the hemispheric asymmetry reduction in older adults model of cognitive aging. When older listeners attend to selected target locations in the free field, their CAEP components (N1, P2, P3) are again consistently smaller relative to younger listeners, and the reduced asymmetry in the distribution of cortical activity is maintained. As this research matures, proper neural biomarkers for changes in spatial hearing can provide objective evidence of impairment and targets for remediation. Future research should focus on the development and evaluation of effective approaches for remediating these spatial processing deficits associated with aging and hearing loss.



from #Audiology via ola Kala on Inoreader https://ift.tt/2IaGtYz
via IFTTT

Animal model studies yield translational solutions for cochlear drug delivery

Publication date: Available online 5 May 2018
Source: Hearing Research
Author(s): R.D. Frisina, M. Budzevich, X. Zhu, G.V. Martinez, J.P. Walton, D.A. Borkholder
The field of hearing and deafness research is about to enter an era where new cochlear drug delivery methodologies will become more innovative and plentiful. The present report provides a representative review of previous studies where efficacious results have been obtained with animal models, primarily rodents, for protection against acute hearing loss such as acoustic trauma due to noise overexposure, antibiotic use and cancer chemotherapies. These approaches were initiated using systemic injections or oral administrations of otoprotectants. Now, exciting new options for local drug delivery are being developed, which open up possibilities for using novel otoprotective drugs or compounds that might not be suitable for systemic use, or that might interfere with the efficacious actions of chemotherapeutic agents or antibiotics. These include the use of nanoparticles (with or without magnetic field supplementation), hydrogels, cochlear micropumps, and new transtympanic injectable compounds, sometimes in combination with cochlear implants.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I4JaPo
via IFTTT

Eyes and ears: using eye tracking and pupillometry to understand challenges to speech recognition

Publication date: Available online 4 May 2018
Source: Hearing Research
Author(s): Kristin J. Van Engen, Drew J. McLaughlin
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.
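As a concrete example of the eye-tracking side, a common visual-world-style measure is the proportion of trials fixating a target region in each time bin after word onset. A minimal sketch with simulated gaze samples; the ROI coding, sampling rate, and trial counts are assumptions, not the authors' protocol.

```python
import numpy as np

# Sketch: proportion of trials fixating the target picture over time.
# 'gaze_roi' is an assumed (n_trials x n_samples) array of ROI codes
# (0 = elsewhere, 1 = target, 2/3 = competitors), sampled at 'fs' Hz.
fs = 250
TARGET = 1
rng = np.random.default_rng(0)
gaze_roi = rng.integers(0, 4, size=(60, fs * 2))   # 2 s per trial, random data

prop_target = (gaze_roi == TARGET).mean(axis=0)    # per-sample proportion

# Downsample into 50-ms bins for plotting/analysis
bin_len = int(0.05 * fs)
n_bins = gaze_roi.shape[1] // bin_len
binned = prop_target[:n_bins * bin_len].reshape(n_bins, bin_len).mean(axis=1)
print(binned[:5])   # first five 50-ms bins after word onset
```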



from #Audiology via ola Kala on Inoreader https://ift.tt/2Iel9BG
via IFTTT

Bone morphogenetic protein 4 antagonizes hair cell regeneration in the avian auditory epithelium

Publication date: Available online 2 May 2018
Source: Hearing Research
Author(s): Rebecca M. Lewis, Jesse J. Keller, Liangcai Wan, Jennifer S. Stone
Permanent hearing loss is often a result of damage to cochlear hair cells, which mammals are unable to regenerate. Non-mammalian vertebrates such as birds replace damaged hair cells and restore hearing function, but mechanisms controlling regeneration are not understood. The secreted protein bone morphogenetic protein 4 (BMP4) regulates inner ear morphogenesis and hair cell development. To investigate mechanisms controlling hair cell regeneration in birds, we examined expression and function of BMP4 in the auditory epithelia (basilar papillae) of chickens of either sex after hair cell destruction by ototoxic antibiotics. In mature basilar papillae, BMP4 mRNA is highly expressed in hair cells, but not in hair cell progenitors (supporting cells). Supporting cells transcribe genes encoding receptors for BMP4 (BMPR1A, BMPR1B, and BMPR2) and effectors of BMP4 signaling (ID transcription factors). Following hair cell destruction, BMP4 transcripts are lost from the sensory epithelium. Using organotypic cultures, we demonstrate that treatments with BMP4 during hair cell destruction prevent supporting cells from upregulating expression of the pro-hair cell transcription factor ATOH1, entering the cell cycle, and fully transdifferentiating into hair cells, but they do not induce cell death. By contrast, noggin, a BMP4 inhibitor, increases numbers of regenerated hair cells. These findings demonstrate that BMP4 antagonizes hair cell regeneration in the chicken basilar papilla, at least in part by preventing accumulation of ATOH1 in hair cell precursors.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I7Nptu
via IFTTT

Editorial Board

Publication date: May 2018
Source: Hearing Research, Volume 362





from #Audiology via ola Kala on Inoreader https://ift.tt/2I9KRa9
via IFTTT

Sound-localization performance of patients with single-sided deafness is not improved when listening with a bone-conduction device

Publication date: Available online 19 April 2018
Source: Hearing Research
Author(s): Martijn J.H. Agterberg, Ad F.M. Snik, Rens M.G. Van de Goor, Myrthe K.S. Hol, A. John Van Opstal
An increased number of treatment options has become available for patients with single-sided deafness (SSD) who are seeking hearing rehabilitation. For example, bone-conduction devices that employ contralateral routing of sound (CROS), by transmitting acoustic bone vibrations from the deaf side to the cochlea of the hearing ear, are widely used. However, in some countries, cochlear implantation is becoming the standard treatment. The present study investigated whether CROS intervention, by means of a CROS bone-conduction device (C-BCD), affected the sound-localization performance of patients with SSD. Several studies have reported unexpectedly moderate to good unilateral sound-localization abilities in unaided SSD listeners. Listening with a C-BCD might deteriorate these localization abilities, because sounds are transmitted through bone conduction to the contralateral normal-hearing ear and could thus interfere with monaural level cues (i.e. ambiguous monaural head-shadow cues) or with the subtle spectral localization cues on which the listener has learned to rely. The present study included nineteen SSD patients who had been using their C-BCD for more than five months. To assess the use of the different localization cues, we investigated their localization abilities for broadband (BB, 0.5–20 kHz), low-pass (LP, 0.5–1.5 kHz), and high-pass filtered noises (HP, 3–20 kHz) of varying intensities. Experiments were performed in complete darkness, by measuring orienting head-movement responses under open-loop localization conditions. We demonstrate that a minority of listeners with SSD (5 out of 19) could localize BB and HP (but not LP) sounds in the horizontal plane in the unaided condition, and that a C-BCD did not deteriorate their localization abilities.
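A common way to quantify open-loop, head-orienting localization of the kind described is a linear fit of response azimuth on target azimuth, whose slope ("gain") approaches 1 for accurate localization. A minimal sketch with invented data; this is a generic metric under that assumption, not necessarily the authors' exact analysis.

```python
import numpy as np

# Sketch: stimulus-response regression for head-orienting localization.
# Target and response azimuths (degrees) are hypothetical placeholders.
target_az = np.array([-75, -50, -25, 0, 25, 50, 75], dtype=float)
response_az = np.array([-60, -42, -20, 3, 18, 44, 62], dtype=float)

gain, bias = np.polyfit(target_az, response_az, 1)  # slope ("gain") and offset
residuals = response_az - (gain * target_az + bias)
mae = np.abs(residuals).mean()
print(f"gain={gain:.2f}, bias={bias:.1f} deg, mean abs error={mae:.1f} deg")
```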



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2jOocWw
via IFTTT

Editorial Board

Publication date: June 2018
Source: Hearing Research, Volume 363





from #Audiology via xlomafota13 on Inoreader https://ift.tt/2Ic8leU
via IFTTT

Spatial hearing ability of the pigmented guinea pig (Cavia porcellus): minimum audible angle and spatial release from masking in azimuth

Publication date: Available online 27 April 2018
Source: Hearing Research
Author(s): Nathanial T. Greene, Kelsey L. Anbuhl, Alexander T. Ferber, Marisa DeGuzman, Paul D. Allen, Daniel J. Tollin
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the “prepulse”) along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger speaker-swap angles (i.e., greater PPI), indicating that the speaker “swap” paradigm is sufficient to assess detection of spatially separated sound sources. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate the animals' ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed a reduction in startle amplitude (i.e., greater PPI) when the masker was presented at speaker locations near the chirp signal. In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency interaural level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements.
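The PPI metric underlying all three experiments reduces to a fractional drop in startle amplitude on prepulse (swap) trials relative to startle-only trials. A minimal sketch with hypothetical amplitudes; the formula is the standard PPI ratio, not values from this report.

```python
import numpy as np

# Sketch of the standard prepulse-inhibition (PPI) metric for the speaker-swap
# paradigm. Startle amplitudes (arbitrary units) are invented placeholders.
startle_only = np.array([2.1, 1.8, 2.4, 2.0])   # startle-eliciting stimulus alone
startle_swap = np.array([1.2, 1.0, 1.5, 1.1])   # startle preceded by a speaker swap

ppi = 1.0 - startle_swap.mean() / startle_only.mean()
print(f"PPI = {ppi:.0%}")   # larger PPI implies the swap (prepulse) was detected
```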



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2jNxvpt
via IFTTT

Towards an objective test of chronic tinnitus: Properties of auditory cortical potentials evoked by silent gaps in tinnitus-like sounds

Publication date: Available online 17 April 2018
Source: Hearing Research
Author(s): Brandon T. Paul, Marc Schoenwiesner, Sylvie Hébert
A common method designed to identify whether an animal hears tinnitus assumes that tinnitus “fills in” silent gaps in background sound. This phenomenon has not been reliably demonstrated in humans. One test of the gap-filling hypothesis would be to determine whether gap-evoked cortical potentials are absent or attenuated when measured within background sound matched to the tinnitus sensation. However, the tinnitus sensation is usually of low intensity and high frequency, and it is unknown whether cortical responses can be measured with such “weak” stimulus properties. Therefore, the aim of the present study was to test the plausibility of observing these responses in the EEG of humans without tinnitus. Twelve non-tinnitus participants heard narrowband noises centered at 5 or 10 kHz at sensation levels of 5, 15, or 30 dB. Silent gaps of 20 ms duration were randomly inserted into the noise stimuli, and cortical potentials evoked by these gaps were measured with 64-channel EEG. Gap-evoked cortical responses were statistically identifiable in all conditions for all but one participant. Responses did not differ significantly between noise frequencies or levels. The results suggest that cortical responses can be measured when evoked by gaps in sounds that mirror the acoustic properties of tinnitus. This design can validate the animal model and be used as a diagnostic test for tinnitus in humans.
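Gap-evoked potentials of this kind are obtained by epoching the continuous EEG around each gap onset, baseline-correcting, and averaging. A minimal sketch on simulated data; the channel count matches the 64-channel montage mentioned above, but the sampling rate, windows, and data are assumptions.

```python
import numpy as np

# Sketch: extract gap-evoked cortical potentials from continuous EEG.
fs = 512
eeg = np.random.randn(64, fs * 300)                     # 64 channels, 5 min (random)
gap_onsets_s = np.sort(np.random.uniform(1, 298, 100))  # assumed gap onset times (s)

pre, post = int(0.1 * fs), int(0.4 * fs)                # -100 to +400 ms epochs
epochs = []
for t in gap_onsets_s:
    i = int(t * fs)
    ep = eeg[:, i - pre:i + post]
    ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
    epochs.append(ep)

gap_erp = np.mean(epochs, axis=0)                       # channels x samples average
print(gap_erp.shape)
```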



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2IelioI
via IFTTT

Why Does Language Not Emerge Until the Second Year?

Publication date: Available online 9 May 2018
Source: Hearing Research
Author(s): Rhodri Cusack, Conor J. Wild, Leire Zubiaurre-Elorza, Annika C. Linke
From their second year, infants typically begin to show rapid acquisition of receptive and expressive language. Here, we ask why these language skills do not begin to develop earlier. One evolutionary hypothesis is that infants are born when many brain systems are immature and not yet functioning, including those critical to language, because human infants have a large head and their mother's pelvis size is limited, necessitating an early birth. An alternative proposal, inspired by discoveries in machine learning, is that the language systems are mature enough to function but need auditory experience to develop effective representations of speech before the language functions that manifest in behaviour can emerge. Growing evidence, in particular from neuroimaging, supports this latter hypothesis. We have previously shown with magnetic resonance imaging (MRI) that the acoustic radiation, carrying rich information to auditory cortex, is largely mature by 1 month, and using functional MRI (fMRI) that auditory cortex is processing many complex features of natural sounds by 3 months. However, speech perception relies upon a network of regions beyond auditory cortex, and it is not established whether this network is mature. Here we measure the maturity of the speech network using functional connectivity with fMRI in infants at 3 months (N=6) and 9 months (N=7), and in an adult comparison group (N=15). We find that functional connectivity in speech networks is mature at 3 months, suggesting that the delay in the onset of language is not due to brain immaturity but rather to the time needed to develop representations through experience. Future avenues for the study of language development are proposed, and the implications for clinical care and infant education are discussed.
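Functional connectivity analyses like the one described typically correlate mean BOLD time courses between regions of interest. A minimal sketch follows; the ROI names and data are placeholders, not the study's speech-network regions or results.

```python
import numpy as np

# Sketch of an ROI-based functional-connectivity analysis: correlate mean BOLD
# time courses among assumed speech-network regions (names are hypothetical).
rois = ["auditory_ctx", "STG_post", "IFG", "motor_speech"]
rng = np.random.default_rng(1)
bold = rng.standard_normal((len(rois), 200))   # ROI x timepoints, random data

fc = np.corrcoef(bold)                         # ROI-to-ROI correlation matrix

# Group maturity could then be assessed by comparing the off-diagonal entries
# of an infant group's mean FC matrix against an adult template.
iu = np.triu_indices(len(rois), k=1)
print(fc[iu])                                  # unique ROI-pair correlations
```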



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2Ielfcw
via IFTTT

Subcortical pathways: Towards a better understanding of auditory disorders

Publication date: May 2018
Source: Hearing Research, Volume 362
Author(s): Richard A. Felix, Boris Gourévitch, Christine V. Portfors
Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders.



from #Audiology via ola Kala on Inoreader https://ift.tt/2nrEbvJ
via IFTTT

Why Does Language Not Emerge Until the Second Year?

Publication date: Available online 9 May 2018
Source:Hearing Research
Author(s): Rhodri Cusack, Conor J. Wild, Leire Zubiaurre-Elorza, Annika C. Linke
From their second year, infants typically begin to show rapid acquisition of receptive and expressive language. Here, we ask why these language skills do not begin to develop earlier. One evolutionary hypothesis is that infants are born when many brains systems are immature and not yet functioning, including those critical to language, because human infants have large have a large head and their mother's pelvis size is limited, necessitating an early birth. An alternative proposal, inspired by discoveries in machine learning, is that the language systems are mature enough to function but need auditory experience to develop effective representations of speech, before the language functions that manifest in behaviour can emerge. Growing evidence, in particular from neuroimaging, is supporting this latter hypothesis. We have previously shown with magnetic resonance imaging (MRI) that the acoustic radiation, carrying rich information to auditory cortex, is largely mature by 1 month, and using functional MRI (fMRI) that auditory cortex is processing many complex features of natural sounds by 3 months. However, speech perception relies upon a network of regions beyond auditory cortex, and it is not established if this network is mature. Here we measure the maturity of the speech network using functional connectivity with fMRI in infants at 3 months (N=6) and 9 months (N=7), and in an adult comparison group (N=15). We find that functional connectivity in speech networks is mature at 3 months, suggesting that the delay in the onset of language is not due to brain immaturity but rather to the time needed to develop representations through experience. Future avenues for the study of language development are proposed, and the implications for clinical care and infant education are discussed.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ielfcw
via IFTTT

Increased Spontaneous Firing Rates in Auditory Midbrain Following Noise Exposure Are Specifically Abolished by a Kv3 Channel Modulator

Publication date: Available online 30 April 2018
Source:Hearing Research
Author(s): Lucy A. Anderson, Lara L. Hesse, Nadia Pilati, Warren M.H. Bakay, Giuseppe Alvaro, Charles H. Large, David McAlpine, Roland Schaette, Jennifer F. Linden
Noise exposure has been shown to produce long-lasting increases in spontaneous activity in central auditory structures in animal models, and similar pathologies are thought to contribute to clinical phenomena such as hyperacusis or tinnitus in humans. Here we demonstrate that multi-unit spontaneous neuronal activity in the inferior colliculus (IC) of mice is significantly elevated four weeks following noise exposure at recording sites with frequency tuning within or near the noise exposure band, and this selective central auditory pathology can be normalised through administration of a novel compound that modulates activity of Kv3 voltage-gated ion channels. The compound had no statistically significant effect on IC spontaneous activity without noise exposure, nor on thresholds or frequency tuning of tone-evoked responses either with or without noise exposure. Administration of the compound produced some reduction in the magnitude of evoked responses to a broadband noise, but unlike effects on spontaneous rates, these effects on evoked responses were not specific to recording sites with frequency tuning within the noise exposure band. Thus, the results suggest that modulators of Kv3 channels can selectively counteract increases in spontaneous activity in the auditory midbrain associated with noise exposure.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I3Prux
via IFTTT

A “voice patch” system in the primate brain for processing vocal information?

Publication date: Available online 7 May 2018
Source:Hearing Research
Author(s): Pascal Belin, Clémentine Bodin, Virginia Aglieri
We review behavioural and neural evidence for the processing of information contained in conspecific vocalizations (CVs) in three primate species: humans, macaques and marmosets. We focus on abilities that are present and ecologically relevant in all three species: the detection and sensitivity to CVs; and the processing of identity cues in CVs. Current evidence, although fragmentary, supports the notion of a “voice patch system” in the primate brain analogous to the face patch system of visual cortex: a series of discrete, interconnected cortical areas supporting increasingly abstract representations of the vocal input. A central question concerns the degree to which the voice patch system is conserved in evolution. We outline challenges that arise and suggesting potential avenues for comparing the organization of the voice patch system across primate brains.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic8aQM
via IFTTT

Cortical processing of location changes in a “cocktail-party” situation: Spatial oddball effects on electrophysiological correlates of auditory selective attention

Publication date: Available online 27 April 2018
Source:Hearing Research
Author(s): Jörg Lewald, Michael-Christian Schlüter, Stephan Getzmann
Neural mechanisms of selectively attending to a sound source of interest in a simulated “cocktail-party” situation, composed of multiple competing sources, were investigated using event-related potentials in combination with a spatial oddball design. Subjects either detected rare spatial deviants in a series of standard sounds or passively listened. Targets either appeared in isolation or in the presence of two distractor sound sources at different locations (“cocktail-party” condition). Deviant-minus-standard difference potentials revealed mismatch negativity, P3a, and P3b. However, mainly the P3b was modulated by spatial conditions of stimulation, with lower amplitude for “cocktail-party”, than single, sounds. In the active condition, cortical source localization revealed two distinct foci of maximum differences in electrical activity for the contrast of single vs. “cocktail-party” sounds: the right inferior frontal junction and the right anterior superior parietal lobule. These areas may be specifically involved in processes associated with selective attention in a “cocktail-party” situation.

Graphical abstract

image


from #Audiology via ola Kala on Inoreader https://ift.tt/2I6iZYv
via IFTTT

Impact of SNR, masker type and noise reduction processing on sentence recognition performance and listening effort as indicated by the pupil dilation response

Publication date: Available online 6 May 2018
Source:Hearing Research
Author(s): Barbara Ohlenforst, Dorothea Wendt, Sophia E. Kramer, Graham Naylor, Adriana A. Zekveld, Thomas Lunner
Recent studies have shown that activating the noise reduction scheme in hearing aids results in a smaller peak pupil dilation (PPD), indicating reduced listening effort, and 50% and 95% correct sentence recognition with a 4-talker masker. The objective of this study was to measure the effect of the noise reduction scheme (on or off) on PPD and sentence recognition across a wide range of signal-to-noise ratios (SNRs) from +16 dB to -12 dB and two masker types (4-talker and stationary noise). Relatively low PPDs were observed at very low (-12 dB) and very high (+16 dB to +8 dB) SNRs presumably due to ‘giving up’ and ‘easy listening’, respectively. The maximum PPD was observed with SNRs at approximately 50% correct sentence recognition. Sentence recognition with both masker types was significantly improved by the noise reduction scheme, which corresponds to the shift in performance from SNR function at approximately 5 dB toward a lower SNR. This intelligibility effect was accompanied by a corresponding effect on the PPD, shifting the peak by approximately 4 dB toward a lower SNR. In addition, with the 4-talker masker, when the noise reduction scheme was active, the PPD was smaller overall than that when the scheme was inactive. We conclude that with the 4-talker masker, noise reduction scheme processing provides a listening effort benefit in addition to any effect associated with improved intelligibility. Thus, the effect of the noise reduction scheme on listening effort incorporates more than can be explained by intelligibility alone, emphasizing the potential importance of measuring listening effort in addition to traditional speech reception measures.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic83Vm
via IFTTT

Characterizing a novel vGlut3-P2A-iCreER knockin mouse strain in cochlea

Publication date: Available online 17 April 2018
Source:Hearing Research
Author(s): Chao Li, Yilai Shu, Guangqin Wang, He Zhang, Ying Lu, Xiang Li, Gen Li, Lei Song, Zhiyong Liu
Precise mouse genetic studies rely on tools that can label specific cell types. In the mouse cochlea, previous studies suggest that vesicular glutamate transporter 3 (vGlut3), also known as Slc17a8, is specifically expressed in inner hair cells (IHCs) and that loss of vGlut3 causes deafness. To take advantage of this unique expression pattern, we generated a novel vGlut3-P2A-iCreER knockin mouse strain. The P2A-iCreER cassette is precisely inserted immediately before the stop codon of vGlut3, leaving endogenous vGlut3 intact while coupling its expression to iCreER. Approximately 10.7%, 85.6%, and 41.8% of IHCs are tdTomato+ when tamoxifen is given to the vGlut3-P2A-iCreER/+; Rosa26-LSL-tdtomato/+ reporter strain at P2/P3, P10/P11, and P30/P31, respectively. TdTomato+ OHCs are never observed. Interestingly, besides IHCs, glial cells, but not spiral ganglion neurons (SGNs), are tdTomato+, as evidenced by the presence of Sox10+/tdTomato+ cells and tdTomato+ cells negative for Prox1, Gata3, and Tuj1 in the SGN region. We further validated vGlut3 expression in the SGN region independently by vGlut3 in situ hybridization and antibody staining. Moreover, the total number of tdTomato+ glial cells decreased gradually when tamoxifen was given from P2/P3 to P30/P31. Taken together, vGlut3-P2A-iCreER is an efficient genetic tool to specifically target IHCs for gene manipulation, complementary to the Prestin-CreER strain that exclusively labels cochlear outer hair cells (OHCs).



from #Audiology via ola Kala on Inoreader https://ift.tt/2jOnWqw
via IFTTT

How aging impacts the encoding of binaural cues and the perception of auditory space

Publication date: Available online 5 May 2018
Source:Hearing Research
Author(s): Ann Clock Eddins, Erol J. Ozmeral, David A. Eddins
Over the years, the effect of aging on auditory function has been investigated in animal models and humans in an effort to characterize age-related changes in both perception and physiology. Here, we review how aging may impact neural encoding and processing of binaural and spatial cues in human listeners with a focus on recent work by the authors as well as others. Age-related declines in monaural temporal processing, as estimated from measures of gap detection and temporal fine structure discrimination, have been associated with poorer performance on binaural tasks that require precise temporal processing. In lateralization and localization tasks, as well as in the detection of signals in noise, marked age-related changes have been demonstrated in both behavioral and electrophysiological measures and have been attributed to declines in neural synchrony and reduced central inhibition with advancing age. Evidence for such mechanisms, however, is influenced by the task (passive vs. attending) and the stimulus paradigm (e.g., static vs. continuous with dynamic change). That is, cortical auditory evoked potentials (CAEP) measured in response to static interaural time differences (ITDs) are larger in older versus younger listeners, consistent with reduced inhibition, while continuous stimuli with dynamic ITD changes lead to smaller responses in older compared to younger adults, suggestive of poorer neural synchrony. Additionally, the distribution of cortical activity is broader and less asymmetric in older than younger adults, consistent with the hemispheric asymmetry reduction in older adults (HAROLD) model of cognitive aging. When older listeners attend to selected target locations in the free field, their CAEP components (N1, P2, P3) are again consistently smaller relative to younger listeners, and the reduced asymmetry in the distribution of cortical activity is maintained. As this research matures, proper neural biomarkers for changes in spatial hearing can provide objective evidence of impairment and targets for remediation. Future research should focus on the development and evaluation of effective approaches for remediating these spatial processing deficits associated with aging and hearing loss.



from #Audiology via ola Kala on Inoreader https://ift.tt/2IaGtYz
via IFTTT

Animal model studies yield translational solutions for cochlear drug delivery

Publication date: Available online 5 May 2018
Source:Hearing Research
Author(s): R.D. Frisina, M. Budzevich, X. Zhu, G.V. Martinez, J.P. Walton, D.A. Borkholder
The field of hearing and deafness research is about to enter an era in which cochlear drug delivery methodologies will become more innovative and plentiful. The present report provides a representative review of previous studies in which efficacious results have been obtained with animal models, primarily rodents, for protection against acute hearing loss caused by acoustic trauma from noise overexposure, antibiotic use, and cancer chemotherapies. These approaches were initiated using systemic injections or oral administration of otoprotectants. Now, exciting new options for local drug delivery are being developed, opening up possibilities for novel otoprotective drugs or compounds that might not be suitable for systemic use, or that might interfere with the efficacious actions of chemotherapeutic agents or antibiotics. These include the use of nanoparticles (with or without magnetic field supplementation), hydrogels, cochlear micropumps, and new transtympanic injectable compounds, sometimes in combination with cochlear implants.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I4JaPo
via IFTTT

Eyes and ears: using eye tracking and pupillometry to understand challenges to speech recognition

Publication date: Available online 4 May 2018
Source:Hearing Research
Author(s): Kristin J. Van Engen, Drew J. McLaughlin
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.
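As a concrete illustration of the pupillometry measure described above, the following sketch computes a peak pupil dilation as the maximum of a baseline-corrected trace. The window lengths and the toy trace are assumptions for illustration, not any particular lab's pipeline.

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0):
    """Baseline-corrected peak pupil dilation (PPD) for one trial.

    trace      : pupil-diameter samples for the trial (arbitrary units)
    fs         : sampling rate in Hz
    baseline_s : duration of the pre-stimulus baseline window in seconds
    """
    n_base = int(baseline_s * fs)
    baseline = np.mean(trace[:n_base])   # mean pre-stimulus diameter
    dilation = trace - baseline          # dilation relative to baseline
    return dilation[n_base:].max()       # peak after stimulus onset

# Toy trial at 60 Hz: a near-flat 1-s baseline followed by a simulated dilation.
fs = 60
t = np.arange(0, 4, 1 / fs)
trace = 3.0 + 0.4 * np.exp(-((t - 2.5) ** 2) / 0.3)
print(f"PPD = {peak_pupil_dilation(trace, fs):.3f} (arbitrary units)")
```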



from #Audiology via ola Kala on Inoreader https://ift.tt/2Iel9BG
via IFTTT

Bone morphogenetic protein 4 antagonizes hair cell regeneration in the avian auditory epithelium

Publication date: Available online 2 May 2018
Source:Hearing Research
Author(s): Rebecca M. Lewis, Jesse J. Keller, Liangcai Wan, Jennifer S. Stone
Permanent hearing loss is often a result of damage to cochlear hair cells, which mammals are unable to regenerate. Non-mammalian vertebrates such as birds replace damaged hair cells and restore hearing function, but mechanisms controlling regeneration are not understood. The secreted protein bone morphogenetic protein 4 (BMP4) regulates inner ear morphogenesis and hair cell development. To investigate mechanisms controlling hair cell regeneration in birds, we examined expression and function of BMP4 in the auditory epithelia (basilar papillae) of chickens of either sex after hair cell destruction by ototoxic antibiotics. In mature basilar papillae, BMP4 mRNA is highly expressed in hair cells, but not in hair cell progenitors (supporting cells). Supporting cells transcribe genes encoding receptors for BMP4 (BMPR1A, BMPR1B, and BMPR2) and effectors of BMP4 signaling (ID transcription factors). Following hair cell destruction, BMP4 transcripts are lost from the sensory epithelium. Using organotypic cultures, we demonstrate that treatments with BMP4 during hair cell destruction prevent supporting cells from upregulating expression of the pro-hair cell transcription factor ATOH1, entering the cell cycle, and fully transdifferentiating into hair cells, but they do not induce cell death. By contrast, noggin, a BMP4 inhibitor, increases numbers of regenerated hair cells. These findings demonstrate that BMP4 antagonizes hair cell regeneration in the chicken basilar papilla, at least in part by preventing accumulation of ATOH1 in hair cell precursors.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I7Nptu
via IFTTT

Editorial Board

Publication date: May 2018
Source:Hearing Research, Volume 362





from #Audiology via ola Kala on Inoreader https://ift.tt/2I9KRa9
via IFTTT

Sound-localization performance of patients with single-sided deafness is not improved when listening with a bone-conduction device

Publication date: Available online 19 April 2018
Source:Hearing Research
Author(s): Martijn J.H. Agterberg, Ad F.M. Snik, Rens M.G. Van de Goor, Myrthe K.S. Hol, A. John Van Opstal
An increasing number of treatment options has become available for patients with single-sided deafness (SSD) who are seeking hearing rehabilitation. For example, bone-conduction devices that employ contralateral routing of sound (CROS), by transmitting acoustic bone vibrations from the deaf side to the cochlea of the hearing ear, are widely used. However, in some countries cochlear implantation is becoming the standard treatment. The present study investigated whether CROS intervention, by means of a CROS bone-conduction device (C-BCD), affected sound-localization performance of patients with SSD. Several studies have reported unexpectedly moderate to good unilateral sound-localization abilities in unaided SSD listeners. Listening with a C-BCD might deteriorate these localization abilities because sounds transmitted through bone conduction to the contralateral normal-hearing ear could interfere with the monaural level cues (i.e., ambiguous monaural head-shadow cues) or with the subtle spectral localization cues on which the listener has learned to rely. The present study included nineteen SSD patients who had been using their C-BCD for more than five months. To assess the use of the different localization cues, we investigated their localization abilities for broadband (BB, 0.5–20 kHz), low-pass (LP, 0.5–1.5 kHz), and high-pass filtered noises (HP, 3–20 kHz) of varying intensities. Experiments were performed in complete darkness by measuring orienting head-movement responses under open-loop localization conditions. We demonstrate that a minority of listeners with SSD (5 out of 19) could localize BB and HP (but not LP) sounds in the horizontal plane in the unaided condition, and that the C-BCD did not deteriorate their localization abilities.
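For readers who want a feel for the three stimulus classes, the sketch below generates comparable noise bands with standard Butterworth filters; the sampling rate, filter order, and 150-ms duration are assumptions, with only the band edges taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000  # assumed sampling rate (Hz); not specified in the abstract

def band_noise(duration_s, lo_hz, hi_hz, fs=FS, order=4):
    """Gaussian noise band-pass filtered between lo_hz and hi_hz."""
    noise = np.random.randn(int(duration_s * fs))
    sos = butter(order, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, noise)

bb = band_noise(0.15, 500, 20000)   # broadband: 0.5-20 kHz
lp = band_noise(0.15, 500, 1500)    # low-pass band: 0.5-1.5 kHz (ITD cues)
hp = band_noise(0.15, 3000, 20000)  # high-pass band: 3-20 kHz (level/spectral cues)
```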



from #Audiology via ola Kala on Inoreader https://ift.tt/2jOocWw
via IFTTT

Editorial Board

Publication date: June 2018
Source:Hearing Research, Volume 363





from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic8leU
via IFTTT

Spatial hearing ability of the pigmented guinea pig (Cavia porcellus): minimum audible angle and spatial release from masking in azimuth

Publication date: Available online 27 April 2018
Source:Hearing Research
Author(s): Nathanial T. Greene, Kelsey L. Anbuhl, Alexander T. Ferber, Marisa DeGuzman, Paul D. Allen, Daniel J. Tollin
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the “prepulse”) along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker “swap” paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed a reduction in startle amplitude (i.e., greater PPI) when the masker was presented at speaker locations near the chirp signal. In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements.
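A common way to quantify PPI, sketched below, is the percent reduction in mean startle amplitude on prepulse trials relative to startle-alone trials; this particular formula and the toy data are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

def percent_ppi(startle_alone, startle_prepulse):
    """Prepulse inhibition as the percent reduction in mean startle amplitude.

    startle_alone    : amplitudes on startle-only trials
    startle_prepulse : amplitudes on trials preceded by a prepulse
                       (here, a swap of the noise to a new speaker location)
    """
    return 100.0 * (1.0 - np.mean(startle_prepulse) / np.mean(startle_alone))

# Toy data: a larger swap angle yields a smaller startle, i.e., greater PPI.
baseline = [1.00, 0.95, 1.05, 0.98]
swap_15_deg = [0.90, 0.85, 0.92]
swap_90_deg = [0.55, 0.60, 0.58]
print(f"15-degree swap: {percent_ppi(baseline, swap_15_deg):5.1f}% PPI")
print(f"90-degree swap: {percent_ppi(baseline, swap_90_deg):5.1f}% PPI")
```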



from #Audiology via ola Kala on Inoreader https://ift.tt/2jNxvpt
via IFTTT

Towards an objective test of chronic tinnitus: Properties of auditory cortical potentials evoked by silent gaps in tinnitus-like sounds

Publication date: Available online 17 April 2018
Source:Hearing Research
Author(s): Brandon T. Paul, Marc Schoenwiesner, Sylvie Hébert
A common method designed to identify whether an animal hears tinnitus assumes that tinnitus “fills in” silent gaps in background sound. This phenomenon has not been reliably demonstrated in humans. One test of the gap-filling hypothesis would be to determine whether gap-evoked cortical potentials are absent or attenuated when measured within background sound matched to the tinnitus sensation. However, the tinnitus sensation is usually of low intensity and high frequency, and it is unknown whether cortical responses can be measured with such “weak” stimulus properties. The aim of the present study was therefore to test the plausibility of observing these responses in the EEG of humans without tinnitus. Twelve non-tinnitus participants heard narrowband noises centered at 5 or 10 kHz at sensation levels of 5, 15, or 30 dB. Silent gaps of 20 ms duration were randomly inserted into the noise stimuli, and cortical potentials evoked by these gaps were measured with 64-channel EEG. Gap-evoked cortical responses were statistically identifiable in all conditions for all but one participant. Responses did not differ significantly between noise frequencies or levels. The results suggest that cortical responses can be measured when evoked by gaps in sounds that mirror the acoustic properties of tinnitus. This design can validate the animal model and could be used as an objective tinnitus diagnosis test in humans.
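The stimulus described, narrowband noise with randomly inserted 20-ms silent gaps, is straightforward to synthesize. The sketch below shows one way, with the sampling rate, noise bandwidth, and gap count assumed for illustration; only the gap duration and center frequencies come from the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # assumed sampling rate (Hz)

def narrowband_noise(duration_s, center_hz, bw_hz=400, fs=FS):
    """Gaussian noise band-passed around center_hz (bandwidth assumed)."""
    noise = np.random.randn(int(duration_s * fs))
    band = [center_hz - bw_hz / 2, center_hz + bw_hz / 2]
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, noise)

def insert_gaps(signal, n_gaps, gap_ms=20.0, fs=FS, seed=0):
    """Silence n_gaps windows of gap_ms at random start positions
    (this sketch does not prevent windows from overlapping)."""
    rng = np.random.default_rng(seed)
    gap_len = int(gap_ms / 1000 * fs)
    out = signal.copy()
    starts = rng.choice(len(signal) - gap_len, size=n_gaps, replace=False)
    for start in starts:
        out[start:start + gap_len] = 0.0
    return out

# 5-s noise centered at 5 kHz with ten randomly placed 20-ms gaps.
stim = insert_gaps(narrowband_noise(5.0, 5000), n_gaps=10)
```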



from #Audiology via ola Kala on Inoreader https://ift.tt/2IelioI
via IFTTT

Subcortical pathways: Towards a better understanding of auditory disorders

Publication date: May 2018
Source:Hearing Research, Volume 362
Author(s): Richard A. Felix, Boris Gourévitch, Christine V. Portfors
Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders.



from #Audiology via ola Kala on Inoreader https://ift.tt/2nrEbvJ
via IFTTT

Why Does Language Not Emerge Until the Second Year?

Publication date: Available online 9 May 2018
Source:Hearing Research
Author(s): Rhodri Cusack, Conor J. Wild, Leire Zubiaurre-Elorza, Annika C. Linke
From their second year, infants typically begin to show rapid acquisition of receptive and expressive language. Here, we ask why these language skills do not begin to develop earlier. One evolutionary hypothesis is that infants are born while many brain systems are immature and not yet functioning, including those critical to language, because human infants have a large head and the mother's pelvis size is limited, necessitating an early birth. An alternative proposal, inspired by discoveries in machine learning, is that the language systems are mature enough to function but need auditory experience to develop effective representations of speech before the language functions that manifest in behaviour can emerge. Growing evidence, in particular from neuroimaging, supports this latter hypothesis. We have previously shown with magnetic resonance imaging (MRI) that the acoustic radiation, carrying rich information to auditory cortex, is largely mature by 1 month, and using functional MRI (fMRI) that auditory cortex is processing many complex features of natural sounds by 3 months. However, speech perception relies upon a network of regions beyond auditory cortex, and it was not established whether this network is mature. Here we measure the maturity of the speech network using functional connectivity with fMRI in infants at 3 months (N=6) and 9 months (N=7), and in an adult comparison group (N=15). We find that functional connectivity in speech networks is mature at 3 months, suggesting that the delay in the onset of language is due not to brain immaturity but rather to the time needed to develop representations through experience. Future avenues for the study of language development are proposed, and the implications for clinical care and infant education are discussed.
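Functional connectivity in such studies is commonly quantified as the correlation between regional BOLD time courses; comparing infant and adult connectivity matrices then indexes network maturity. The sketch below shows that computation on toy data; the ROI set and the correlation-based measure are generic assumptions, not the authors' exact pipeline.

```python
import numpy as np

def functional_connectivity(timeseries):
    """Pearson correlation matrix across regions of interest (ROIs).

    timeseries : array of shape (n_timepoints, n_rois), one BOLD time
                 course per ROI of the putative speech network.
    """
    return np.corrcoef(timeseries, rowvar=False)

# Toy data: 200 volumes from 4 hypothetical ROIs sharing a common
# signal, mimicking a functionally connected network.
rng = np.random.default_rng(1)
shared = rng.standard_normal((200, 1))
ts = 0.7 * shared + 0.3 * rng.standard_normal((200, 4))
print(np.round(functional_connectivity(ts), 2))
```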



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ielfcw
via IFTTT

Tone-Evoked Acoustic Change Complex (ACC) Recorded in a Sedated Animal Model

Abstract

The acoustic change complex (ACC) is a scalp-recorded cortical evoked potential complex generated in response to changes (e.g., frequency, amplitude) in an auditory stimulus. The ACC has been well studied in humans, but to our knowledge no animal model had been evaluated. In particular, it was not known whether the ACC could be recorded under the conditions of sedation that would likely be necessary for recordings from animals. For that reason, we tested the feasibility of recording the ACC from sedated cats in response to changes in the frequency and amplitude of pure-tone stimuli. Cats were sedated with ketamine and acepromazine, and subdermal needle electrodes were used to record electroencephalographic (EEG) activity. Tones were presented from a small loudspeaker located near the right ear. Continuous tones alternated at 500-ms intervals between two frequencies or two levels. Neurometric functions were created by recording neural response amplitudes while systematically varying the magnitude of frequency steps centered at octave-spaced frequencies of 2, 4, 8, and 16 kHz, all at 75 dB SPL, or of level steps around 75 dB SPL, tested at 4 and 8 kHz. The ACC could be recorded readily under this ketamine/acepromazine sedation. In contrast, the ACC could not be recorded reliably under any level of isoflurane anesthesia that was tested. The minimum frequency steps (expressed as Weber fractions, df/f) or level steps (expressed in dB) needed to elicit the ACC fell in the range of previous thresholds reported in animal psychophysical tests of discrimination. The success in recording the ACC in sedated animals suggests that the ACC will be a useful tool for evaluating other aspects of auditory acuity in normal hearing and, presumably, in electrical cochlear stimulation, especially for novel stimulation modes that are not yet feasible in humans.
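To make the frequency-step metric concrete: a Weber fraction expresses the step size df relative to the frequency f around which the tones alternate. The helper below uses the geometric mean of the two frequencies as the reference, an assumed convention since the abstract does not specify one.

```python
def weber_fraction(f_low_hz, f_high_hz):
    """Weber fraction df/f for a frequency step, referenced to the
    geometric mean of the two alternating tone frequencies (an assumed
    convention; the reference-frequency choice is not specified above)."""
    center = (f_low_hz * f_high_hz) ** 0.5
    return (f_high_hz - f_low_hz) / center

# Example: a continuous tone alternating between 7.8 and 8.2 kHz,
# i.e., a step centered near 8 kHz.
print(f"df/f = {weber_fraction(7800, 8200):.3f}")  # ~0.050

# Level steps need no conversion: alternating 73 vs 77 dB SPL around
# 75 dB SPL is simply a 4-dB step.
```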



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2G5FNSm
via IFTTT