Friday, 11 May 2018

Local Drug Delivery to the Inner Ear: Principles, Practice, and Future Challenges

Publication date: Available online 11 May 2018
Source:Hearing Research
Author(s): Lawrence R. Lustig




from #Audiology via ola Kala on Inoreader https://ift.tt/2rAjCPI
via IFTTT

MED-EL’s Non-Surgical Bone Conduction Hearing Device Receives FDA Clearance

MED-EL (http://www.medel.com/us/) has received FDA clearance for its new non-surgical bone conduction hearing device ADHEAR, designed for those with conductive hearing loss and single-sided hearing loss. To use ADHEAR, a patented adhesive adapter is placed onto the skin behind the ear and is worn for three to seven days at a time. The lightweight audio processor can be clicked on and off the adapter each day. It picks up sound waves, converts them into vibrations, and transmits them to the bone via the adhesive adapter. The bone then transfers the vibrations through the skull to the inner ear, where they are processed as normal sounds. ADHEAR stays comfortably in position without applying pressure to the skin. MED-EL acquired the device’s technology from the Swedish medical device company Otorix in 2016 and developed it further. The company anticipates that ADHEAR will be available in the summer of 2018 and will provide training for hearing health professionals throughout the country.

Published: 5/11/2018 11:49:00 AM


from #Audiology via xlomafota13 on Inoreader https://ift.tt/2jOfQOs
via IFTTT

Congratulations to NSSLHA for earning Gold Chapter Honors, and to Diane Guerrero for winning the National NSSLHA Member Excellence Award!

The SDSU chapter of NSSLHA earned Gold Chapter Honors from the National NSSLHA Executive Council! SDSU’s chapter is recognized for:

  • Increasing awareness of communication disorders among state and federal legislators
  • Supporting clients, students, and organizations in the community
  • Creating vibrant online conversations in the NSSLHA community
  • Contributing to a donation of more than $10,000 to the ASHFoundation NSSLHA Scholarship, which provides scholarships to students in CSD programs
  • Raising more than $15,250 for the John Tracy Clinic (the NSSLHA Loves 2017–18 recipient)

SLHS Master’s student Diane Guerrero earned the NSSLHA Member Honors – Excellence Award (SLP) for her outstanding accomplishments. To celebrate her efforts, she will be:

  • Featured on NSSLHA’s website and in the NSSLHA Updates e-newsletter
  • Awarded $500

Congratulations to all!



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2rBpxDG
via IFTTT

Chronic lead exposure induces cochlear oxidative stress and potentiates noise-induced hearing loss.

Toxicol Lett. 2018 May 07;:

Authors: Jamesdaniel S, Rosati R, Westrick J, Ruden DM

Abstract
Acquired hearing loss is caused by complex interactions of multiple environmental risk factors, such as elevated levels of lead and noise, which are prevalent in urban communities. This study delineates the mechanism underlying lead-induced auditory dysfunction and its potential interaction with noise exposure. Young-adult C57BL/6 mice were exposed to: 1) control conditions; 2) 2 mM lead acetate in drinking water for 28 days; 3) 90 dB broadband noise 2 h/day for two weeks; and 4) both lead and noise. Blood lead levels were measured by inductively coupled plasma mass spectrometry (ICP-MS), lead-induced cochlear oxidative stress signaling was assessed using targeted gene arrays, and hearing thresholds were assessed by recording auditory brainstem responses. Chronic lead exposure downregulated cochlear Sod1, Gpx1, and Gstk1, which encode critical antioxidant enzymes, and upregulated ApoE, Hspa1a, Ercc2, Prnp, Ccl5, and Sqstm1, which are indicative of cellular apoptosis. Isolated exposure to lead or noise induced 8-12 dB and 11-25 dB shifts in hearing thresholds, respectively. Combined exposure induced 18-30 dB shifts, which was significantly higher than that observed with either isolated exposure. This study suggests that chronic exposure to lead induces cochlear oxidative stress and potentiates noise-induced hearing impairment, possibly through parallel pathways.

PMID: 29746905 [PubMed - as supplied by publisher]
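The exposure effects above are reported as shifts in auditory brainstem response (ABR) thresholds. As a minimal illustration of how such shifts are tabulated, a Python sketch with hypothetical thresholds and test frequencies (not values from the study) is shown below:

```python
import numpy as np

# Hypothetical ABR thresholds (dB SPL) per test frequency, before and after exposure.
frequencies_khz = [8, 16, 32]
baseline = np.array([25, 20, 30])          # pre-exposure thresholds (illustrative)
post = {
    "lead":       np.array([35, 30, 40]),  # illustrative values only
    "noise":      np.array([40, 38, 50]),
    "lead+noise": np.array([50, 45, 58]),
}

for group, thresholds in post.items():
    shift = thresholds - baseline          # threshold shift in dB re: baseline
    for f, s in zip(frequencies_khz, shift):
        print(f"{group:>10s} @ {f:2d} kHz: {int(s):+d} dB shift")
```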



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2Ibpc5x
via IFTTT

Speech Understanding in Noise for Adults With Cochlear Implants: Effects of Hearing Configuration, Source Location Certainty, and Head Movement

Purpose
The primary purpose of this study was to assess speech understanding in quiet and in diffuse noise for adult cochlear implant (CI) recipients utilizing bimodal hearing or bilateral CIs. Our primary hypothesis was that bilateral CI recipients would demonstrate less effect of source azimuth in the bilateral CI condition due to symmetric interaural head shadow.
Method
Sentence recognition was assessed for adult bilateral (n = 25) CI users and bimodal listeners (n = 12) in three conditions: (1) source location certainty regarding fixed target azimuth, (2) source location uncertainty regarding roving target azimuth, and (3) Condition 2 repeated, allowing listeners to turn their heads, as needed.
Results
(a) Bilateral CI users exhibited relatively similar performance regardless of source azimuth in the bilateral CI condition; (b) bimodal listeners exhibited higher performance for speech directed to the better hearing ear even in the bimodal condition; (c) the unilateral, better ear condition yielded higher performance for speech presented to the better ear versus speech to the front or to the poorer ear; (d) source location certainty did not affect speech understanding performance; and (e) head turns did not improve performance. The results confirmed our hypothesis that bilateral CI users exhibited less effect of source azimuth than bimodal listeners. That is, they exhibited similar performance for speech recognition irrespective of source azimuth, whereas bimodal listeners exhibited significantly poorer performance with speech originating from the poorer hearing ear (typically the nonimplanted ear).
Conclusions
Bilateral CI users overcame ear and source location effects observed for the bimodal listeners. Bilateral CI users have access to head shadow on both sides, whereas bimodal listeners generally have interaural asymmetry in both speech understanding and audible bandwidth, limiting the head shadow benefit obtained from the poorer ear (generally the nonimplanted ear). In summary, we found that, in conditions with source location uncertainty and increased ecological validity, bilateral CI performance was superior to bimodal listening.

from #Audiology via ola Kala on Inoreader https://ift.tt/2KRf5AH
via IFTTT

Grammatical Word Production Across Metrical Contexts in School-Aged Children's and Adults' Speech

Purpose
The purpose of this study is to test whether age-related differences in grammatical word production are due to differences in how children and adults chunk speech for output or to immature articulatory timing control in children.
Method
Two groups of 12 children, 5 and 8 years old, and 1 group of 12 adults produced sentences with phrase-medial determiners. Preceding verbs were varied to create different metrical contexts for chunking the determiner with an adjacent content word. Following noun onsets were varied to assess the coherence of determiner–noun sequences. Determiner vowel duration, amplitude, and formant frequencies were measured.
Results
Children produced significantly longer and louder determiners than adults regardless of metrical context. The effect of noun onset on F1 was stronger in children's speech than in adults' speech; the effect of noun onset on F2 was stronger in adults' speech than in children's. Effects of metrical context on anticipatory formant patterns were more evident in children's speech than in adults' speech.
Conclusion
The results suggest that both immature articulatory timing control and age-related differences in how chunks are accessed or planned influence grammatical word production in school-aged children's speech. Future work will focus on the development of long-distance coarticulation to reveal the evolution of speech plan structure over time.
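The Method above reports measuring determiner vowel duration, amplitude, and formant frequencies; the paper does not describe its analysis software. As a rough sketch of one standard approach (LPC root-finding on a windowed vowel segment, not the authors' pipeline), the following Python fragment estimates formants from a synthetic segment; the LPC order and segment parameters are assumptions:

```python
import numpy as np
from scipy.linalg import toeplitz

def lpc_formants(segment, fs, order=12):
    """Estimate formant frequencies (Hz) from a vowel segment via LPC roots."""
    x = np.asarray(segment, dtype=float)
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])          # pre-emphasis
    x = x * np.hamming(len(x))                          # analysis window
    r = np.correlate(x, x, mode="full")[len(x) - 1:]    # autocorrelation, lags 0..N-1
    a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])  # LPC coefficients
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]                   # one root per conjugate pair
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))
    return freqs[freqs > 90]                            # discard near-DC roots

fs = 16000
rng = np.random.default_rng(0)
t = np.arange(int(0.08 * fs)) / fs                      # 80 ms synthetic "vowel"
segment = sum(np.sin(2 * np.pi * f * t) for f in (500, 1500, 2500))
segment = segment + 0.01 * rng.standard_normal(t.size)  # small noise floor
print("duration (ms):", 1000 * len(segment) / fs)
print("RMS amplitude:", np.sqrt(np.mean(segment ** 2)))
print("formant estimates (Hz):", lpc_formants(segment, fs)[:3])
```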

from #Audiology via ola Kala on Inoreader https://ift.tt/2rAqBr5
via IFTTT

Sound-localization performance of patients with single-sided deafness is not improved when listening with a bone-conduction device

Publication date: Available online 19 April 2018
Source:Hearing Research
Author(s): Martijn J.H. Agterberg, Ad F.M. Snik, Rens M.G. Van de Goor, Myrthe K.S. Hol, A. John Van Opstal
An increasing number of treatment options has become available for patients with single-sided deafness (SSD) who are seeking hearing rehabilitation. For example, bone-conduction devices that employ contralateral routing of sound (CROS), by transmitting acoustic bone vibrations from the deaf side to the cochlea of the hearing ear, are widely used. However, in some countries, cochlear implantation is becoming the standard treatment. The present study investigated whether CROS intervention, by means of a CROS bone-conduction device (C-BCD), affected the sound-localization performance of patients with SSD. Several studies have reported unexpected moderate to good unilateral sound-localization abilities in unaided SSD listeners. Listening with a C-BCD might deteriorate these localization abilities because sounds are transmitted through bone conduction to the contralateral, normal-hearing ear and could thus interfere with monaural level cues (i.e., ambiguous monaural head-shadow cues) or with the subtle spectral localization cues on which the listener has learned to rely. The present study included nineteen SSD patients who had been using their C-BCD for more than five months. To assess the use of the different localization cues, we investigated their localization abilities for broadband (BB, 0.5–20 kHz), low-pass (LP, 0.5–1.5 kHz), and high-pass filtered noises (HP, 3–20 kHz) of varying intensities. Experiments were performed in complete darkness, by measuring orienting head-movement responses under open-loop localization conditions. We demonstrate that a minority of listeners with SSD (5 out of 19) could localize BB and HP (but not LP) sounds in the horizontal plane in the unaided condition, and that a C-BCD did not deteriorate their localization abilities.



from #Audiology via ola Kala on Inoreader https://ift.tt/2jOocWw
via IFTTT

Editorial Board

Publication date: June 2018
Source:Hearing Research, Volume 363





from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic8leU
via IFTTT

Spatial hearing ability of the pigmented guinea pig (Cavia porcellus): minimum audible angle and spatial release from masking in azimuth

Publication date: Available online 27 April 2018
Source:Hearing Research
Author(s): Nathanial T. Greene, Kelsey L. Anbuhl, Alexander T. Ferber, Marisa DeGuzman, Paul D. Allen, Daniel J. Tollin
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the “prepulse”) along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker “swap” paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed a reduction in startle amplitude (i.e., greater PPI) when the masker was presented at speaker locations near the chirp signal. In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements.
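Prepulse inhibition in paradigms like the one above is typically expressed as the percentage reduction in startle amplitude relative to trials without a preceding change. A minimal sketch of that calculation, using hypothetical amplitudes rather than data from the study:

```python
def percent_ppi(startle_prepulse, startle_alone):
    """Percent prepulse inhibition: how much the speaker swap reduced the startle."""
    return 100.0 * (1.0 - startle_prepulse / startle_alone)

# Hypothetical mean startle amplitudes (arbitrary units) by speaker-swap angle.
startle_alone = 1.00                       # no location change before the startle pulse
swap_trials = {15: 0.92, 45: 0.78, 90: 0.55, 180: 0.40}

for angle_deg, amplitude in swap_trials.items():
    print(f"swap {angle_deg:3d} deg: PPI = {percent_ppi(amplitude, startle_alone):4.1f} %")
```

Larger PPI for larger swap angles, as in the abstract, indicates that the animal detected the change in source location.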



from #Audiology via ola Kala on Inoreader https://ift.tt/2jNxvpt
via IFTTT

Towards an objective test of chronic tinnitus: Properties of auditory cortical potentials evoked by silent gaps in tinnitus-like sounds

Publication date: Available online 17 April 2018
Source:Hearing Research
Author(s): Brandon T. Paul, Marc Schoenwiesner, Sylvie Hébert
A common method designed to identify whether an animal hears tinnitus assumes that tinnitus “fills in” silent gaps in background sound. This phenomenon has not been reliably demonstrated in humans. One test of the gap-filling hypothesis would be to determine whether gap-evoked cortical potentials are absent or attenuated when measured within background sound matched to the tinnitus sensation. However, the tinnitus sensation is usually of low intensity and high frequency, and it is unknown whether cortical responses can be measured with such “weak” stimulus properties. Therefore, the aim of the present study was to test the plausibility of observing these responses in the EEG in humans without tinnitus. Twelve non-tinnitus participants heard narrowband noises centered at sound frequencies of 5 or 10 kHz at sensation levels of 5, 15, or 30 dB. Silent gaps of 20 ms duration were randomly inserted into the noise stimuli, and cortical potentials evoked by these gaps were measured with 64-channel EEG. Gap-evoked cortical responses were statistically identifiable in all conditions for all but one participant. Responses did not differ significantly between noise frequencies or levels. The results suggest that cortical responses can be measured when evoked by gaps in sounds that mirror the acoustic properties of tinnitus. This design can validate the animal model and be used as a tinnitus diagnosis test in humans.
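The stimuli described above are narrowband noises with brief silent gaps. A minimal sketch of how such a stimulus could be constructed; the sample rate, bandwidth, and gap placement below are illustrative assumptions, not the authors' exact settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                                   # sample rate (Hz), assumed
dur_s, gap_ms, center_hz = 2.0, 20, 5000     # illustrative stimulus parameters
rng = np.random.default_rng(0)

# Narrowband noise centered at 5 kHz (roughly one-third octave wide here).
noise = rng.standard_normal(int(dur_s * fs))
sos = butter(4, [center_hz * 0.89, center_hz * 1.12], btype="bandpass", fs=fs, output="sos")
stim = sosfiltfilt(sos, noise)

# Insert a 20 ms silent gap at a random position away from the stimulus edges.
gap_len = int(gap_ms / 1000 * fs)
start = rng.integers(int(0.25 * fs), len(stim) - gap_len - int(0.25 * fs))
stim[start:start + gap_len] = 0.0
print(f"gap from {start / fs:.3f} s to {(start + gap_len) / fs:.3f} s")
```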



from #Audiology via ola Kala on Inoreader https://ift.tt/2IelioI
via IFTTT

Why Does Language Not Emerge Until the Second Year?

Publication date: Available online 9 May 2018
Source:Hearing Research
Author(s): Rhodri Cusack, Conor J. Wild, Leire Zubiaurre-Elorza, Annika C. Linke
From their second year, infants typically begin to show rapid acquisition of receptive and expressive language. Here, we ask why these language skills do not begin to develop earlier. One evolutionary hypothesis is that infants are born when many brain systems are immature and not yet functioning, including those critical to language, because human infants have a large head and their mother's pelvis size is limited, necessitating an early birth. An alternative proposal, inspired by discoveries in machine learning, is that the language systems are mature enough to function but need auditory experience to develop effective representations of speech before the language functions that manifest in behaviour can emerge. Growing evidence, in particular from neuroimaging, supports this latter hypothesis. We have previously shown with magnetic resonance imaging (MRI) that the acoustic radiation, carrying rich information to auditory cortex, is largely mature by 1 month, and using functional MRI (fMRI) that auditory cortex is processing many complex features of natural sounds by 3 months. However, speech perception relies upon a network of regions beyond auditory cortex, and it is not established whether this network is mature. Here we measure the maturity of the speech network using functional connectivity with fMRI in infants at 3 months (N=6) and 9 months (N=7), and in an adult comparison group (N=15). We find that functional connectivity in speech networks is mature at 3 months, suggesting that the delay in the onset of language is not due to brain immaturity but rather to the time needed to develop representations through experience. Future avenues for the study of language development are proposed, and the implications for clinical care and infant education are discussed.
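Functional connectivity analyses of the kind described above are commonly based on correlations between regional BOLD time courses. A minimal, generic sketch with simulated time series (region names and data are hypothetical, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200
rois = ["auditory_ctx", "STS", "IFG", "motor_ctx"]   # hypothetical regions of interest

# Simulated BOLD time series: the first three regions share a common signal.
shared = rng.standard_normal(n_timepoints)
data = rng.standard_normal((len(rois), n_timepoints))
data[:3] += shared

# Functional connectivity matrix = pairwise Pearson correlations between ROIs.
fc = np.corrcoef(data)
for i, name in enumerate(rois):
    row = " ".join(f"{fc[i, j]:+.2f}" for j in range(len(rois)))
    print(f"{name:>13s}  {row}")
```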



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ielfcw
via IFTTT

Increased Spontaneous Firing Rates in Auditory Midbrain Following Noise Exposure Are Specifically Abolished by a Kv3 Channel Modulator

Publication date: Available online 30 April 2018
Source:Hearing Research
Author(s): Lucy A. Anderson, Lara L. Hesse, Nadia Pilati, Warren M.H. Bakay, Giuseppe Alvaro, Charles H. Large, David McAlpine, Roland Schaette, Jennifer F. Linden
Noise exposure has been shown to produce long-lasting increases in spontaneous activity in central auditory structures in animal models, and similar pathologies are thought to contribute to clinical phenomena such as hyperacusis or tinnitus in humans. Here we demonstrate that multi-unit spontaneous neuronal activity in the inferior colliculus (IC) of mice is significantly elevated four weeks following noise exposure at recording sites with frequency tuning within or near the noise exposure band, and this selective central auditory pathology can be normalised through administration of a novel compound that modulates activity of Kv3 voltage-gated ion channels. The compound had no statistically significant effect on IC spontaneous activity without noise exposure, nor on thresholds or frequency tuning of tone-evoked responses either with or without noise exposure. Administration of the compound produced some reduction in the magnitude of evoked responses to a broadband noise, but unlike effects on spontaneous rates, these effects on evoked responses were not specific to recording sites with frequency tuning within the noise exposure band. Thus, the results suggest that modulators of Kv3 channels can selectively counteract increases in spontaneous activity in the auditory midbrain associated with noise exposure.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I3Prux
via IFTTT

A “voice patch” system in the primate brain for processing vocal information?

Publication date: Available online 7 May 2018
Source:Hearing Research
Author(s): Pascal Belin, Clémentine Bodin, Virginia Aglieri
We review behavioural and neural evidence for the processing of information contained in conspecific vocalizations (CVs) in three primate species: humans, macaques and marmosets. We focus on abilities that are present and ecologically relevant in all three species: the detection of and sensitivity to CVs, and the processing of identity cues in CVs. Current evidence, although fragmentary, supports the notion of a “voice patch system” in the primate brain analogous to the face patch system of visual cortex: a series of discrete, interconnected cortical areas supporting increasingly abstract representations of the vocal input. A central question concerns the degree to which the voice patch system is conserved in evolution. We outline challenges that arise and suggest potential avenues for comparing the organization of the voice patch system across primate brains.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic8aQM
via IFTTT

Cortical processing of location changes in a “cocktail-party” situation: Spatial oddball effects on electrophysiological correlates of auditory selective attention

Publication date: Available online 27 April 2018
Source:Hearing Research
Author(s): Jörg Lewald, Michael-Christian Schlüter, Stephan Getzmann
Neural mechanisms of selectively attending to a sound source of interest in a simulated “cocktail-party” situation, composed of multiple competing sources, were investigated using event-related potentials in combination with a spatial oddball design. Subjects either detected rare spatial deviants in a series of standard sounds or passively listened. Targets appeared either in isolation or in the presence of two distractor sound sources at different locations (“cocktail-party” condition). Deviant-minus-standard difference potentials revealed mismatch negativity, P3a, and P3b. However, mainly the P3b was modulated by the spatial conditions of stimulation, with lower amplitude for “cocktail-party” than for single sounds. In the active condition, cortical source localization revealed two distinct foci of maximum differences in electrical activity for the contrast of single vs. “cocktail-party” sounds: the right inferior frontal junction and the right anterior superior parietal lobule. These areas may be specifically involved in processes associated with selective attention in a “cocktail-party” situation.
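The deviant-minus-standard difference potentials referred to above are obtained by averaging epochs per condition and subtracting the averages. A minimal sketch of that step with simulated epochs (channel count, epoch length, and the injected deflection are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 64, 300              # e.g., 64-channel EEG, 300-sample epochs

# Simulated single-trial epochs: deviants carry an extra negativity around sample 150.
standards = rng.standard_normal((400, n_channels, n_samples))
deviants = rng.standard_normal((80, n_channels, n_samples))
deviants[:, :, 140:170] -= 0.5               # crude stand-in for an MMN-like deflection

# Deviant-minus-standard difference wave per channel.
difference = deviants.mean(axis=0) - standards.mean(axis=0)
peak_sample = np.argmin(difference.mean(axis=0))
print("most negative point of the grand-average difference wave at sample", peak_sample)
```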


from #Audiology via ola Kala on Inoreader https://ift.tt/2I6iZYv
via IFTTT

Impact of SNR, masker type and noise reduction processing on sentence recognition performance and listening effort as indicated by the pupil dilation response

Publication date: Available online 6 May 2018
Source:Hearing Research
Author(s): Barbara Ohlenforst, Dorothea Wendt, Sophia E. Kramer, Graham Naylor, Adriana A. Zekveld, Thomas Lunner
Recent studies have shown that activating the noise reduction scheme in hearing aids results in a smaller peak pupil dilation (PPD), indicating reduced listening effort, at 50% and 95% correct sentence recognition with a 4-talker masker. The objective of this study was to measure the effect of the noise reduction scheme (on or off) on PPD and sentence recognition across a wide range of signal-to-noise ratios (SNRs) from +16 dB to -12 dB and two masker types (4-talker and stationary noise). Relatively low PPDs were observed at very low (-12 dB) and very high (+16 dB to +8 dB) SNRs, presumably due to ‘giving up’ and ‘easy listening’, respectively. The maximum PPD was observed at SNRs corresponding to approximately 50% correct sentence recognition. Sentence recognition with both masker types was significantly improved by the noise reduction scheme, which corresponds to a shift of the performance-versus-SNR function of approximately 5 dB toward a lower SNR. This intelligibility effect was accompanied by a corresponding effect on the PPD, shifting the peak by approximately 4 dB toward a lower SNR. In addition, with the 4-talker masker, the PPD was smaller overall when the noise reduction scheme was active than when it was inactive. We conclude that with the 4-talker masker, noise reduction processing provides a listening effort benefit in addition to any effect associated with improved intelligibility. Thus, the effect of the noise reduction scheme on listening effort incorporates more than can be explained by intelligibility alone, emphasizing the potential importance of measuring listening effort in addition to traditional speech reception measures.
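The peak pupil dilation (PPD) measure used above is typically the maximum of the baseline-corrected pupil trace within a trial. A minimal sketch of that computation; the simulated trace, baseline window, and sampling rate are assumptions for illustration only:

```python
import numpy as np

fs = 60                                        # pupillometer sampling rate (Hz), assumed
t = np.arange(0, 6, 1 / fs)                    # 6 s trial
baseline_window = t < 1.0                      # 1 s pre-sentence baseline, assumed

# Simulated pupil trace (mm): slow dilation peaking mid-trial plus measurement noise.
rng = np.random.default_rng(3)
trace = 3.0 + 0.25 * np.exp(-((t - 3.0) ** 2) / 0.8) + 0.01 * rng.standard_normal(t.size)

baseline = trace[baseline_window].mean()
corrected = trace - baseline                   # baseline-corrected pupil size
ppd = corrected.max()                          # peak pupil dilation for this trial
print(f"baseline = {baseline:.3f} mm, PPD = {ppd:.3f} mm at t = {t[corrected.argmax()]:.2f} s")
```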



from #Audiology via ola Kala on Inoreader https://ift.tt/2Ic83Vm
via IFTTT

Characterizing a novel vGlut3-P2A-iCreER knockin mouse strain in cochlea

Publication date: Available online 17 April 2018
Source:Hearing Research
Author(s): Chao Li, Yilai Shu, Guangqin Wang, He Zhang, Ying Lu, Xiang Li, Gen Li, Lei Song, Zhiyong Liu
Precise mouse genetic studies rely on specific tools that can label specific cell types. In the mouse cochlea, previous studies suggest that vesicular glutamate transporter 3 (vGlut3), also known as Slc17a8, is specifically expressed in inner hair cells (IHCs), and loss of vGlut3 causes deafness. To take advantage of this unique expression pattern, here we generate a novel vGlut3-P2A-iCreER knockin mouse strain. The P2A-iCreER cassette is precisely inserted before the stop codon of vGlut3, so that endogenous vGlut3 is left intact and paired with iCreER. Approximately 10.7%, 85.6%, and 41.8% of IHCs are tdtomato+ when tamoxifen is given to the vGlut3-P2A-iCreER/+; Rosa26-LSL-tdtomato/+ reporter strain at P2/P3, P10/P11, and P30/P31, respectively. Tdtomato+ OHCs are never observed. Interestingly, besides IHCs, glia cells, but not spiral ganglion neurons (SGNs), are tdtomato+, which is further evidenced by the presence of Sox10+/tdtomato+ cells and tdtomato+/Prox1(Gata3 or Tuj1)-negative cells in the SGN region. We further independently validate vGlut3 expression in the SGN region by vGlut3 in situ hybridization and antibody staining. Moreover, the total number of tdtomato+ glia cells decreased gradually when tamoxifen was given from P2/P3 to P30/P31. Taken together, vGlut3-P2A-iCreER is an efficient genetic tool to specifically target IHCs for gene manipulation, complementary to the Prestin-CreER strain that exclusively labels cochlear outer hair cells (OHCs).



from #Audiology via ola Kala on Inoreader https://ift.tt/2jOnWqw
via IFTTT

How aging impacts the encoding of binaural cues and the perception of auditory space

Publication date: Available online 5 May 2018
Source:Hearing Research
Author(s): Ann Clock Eddins, Erol J. Ozmeral, David A. Eddins
Over the years, the effect of aging on auditory function has been investigated in animal models and humans in an effort to characterize age-related changes in both perception and physiology. Here, we review how aging may impact neural encoding and processing of binaural and spatial cues in human listeners, with a focus on recent work by the authors as well as others. Age-related declines in monaural temporal processing, as estimated from measures of gap detection and temporal fine structure discrimination, have been associated with poorer performance on binaural tasks that require precise temporal processing. In lateralization and localization tasks, as well as in the detection of signals in noise, marked age-related changes have been demonstrated in both behavioral and electrophysiological measures and have been attributed to declines in neural synchrony and reduced central inhibition with advancing age. Evidence for such mechanisms, however, is influenced by the task (passive vs. attending) and the stimulus paradigm (e.g., static vs. continuous with dynamic change). That is, cortical auditory evoked potentials (CAEP) measured in response to static interaural time differences (ITDs) are larger in older versus younger listeners, consistent with reduced inhibition, while continuous stimuli with dynamic ITD changes lead to smaller responses in older compared to younger adults, suggestive of poorer neural synchrony. Additionally, the distribution of cortical activity is broader and less asymmetric in older than younger adults, consistent with the hemispheric asymmetry reduction in older adults model of cognitive aging. When older listeners attend to selected target locations in the free field, their CAEP components (N1, P2, P3) are again consistently smaller relative to younger listeners, and the reduced asymmetry in the distribution of cortical activity is maintained. As this research matures, proper neural biomarkers for changes in spatial hearing can provide objective evidence of impairment and targets for remediation. Future research should focus on the development and evaluation of effective approaches for remediating these spatial processing deficits associated with aging and hearing loss.
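Static interaural time differences of the kind used in the CAEP studies described above can be imposed by delaying one channel of a stimulus by a few hundred microseconds. A minimal sketch; the ITD value, sample rate, and noise-burst duration are illustrative assumptions:

```python
import numpy as np

fs = 48000
itd_us = 500                                   # interaural time difference, microseconds
delay = int(round(itd_us * 1e-6 * fs))         # ITD expressed in whole samples

rng = np.random.default_rng(4)
mono = rng.standard_normal(int(0.5 * fs))      # 500 ms noise burst

left = np.concatenate([mono, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), mono])  # right ear lags, so the source is lateralized left
stereo = np.stack([left, right], axis=1)
print(f"ITD of {itd_us} us realised as a {delay}-sample delay; stereo shape {stereo.shape}")
```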



from #Audiology via ola Kala on Inoreader https://ift.tt/2IaGtYz
via IFTTT

Animal model studies yield translational solutions for cochlear drug delivery

Publication date: Available online 5 May 2018
Source:Hearing Research
Author(s): R.D. Frisina, M. Budzevich, X. Zhu, G.V. Martinez, J.P. Walton, D.A. Borkholder
The field of hearing and deafness research is about to enter an era where new cochlear drug delivery methodologies will become more innovative and plentiful. The present report provides a representative review of previous studies where efficacious results have been obtained with animal models, primarily rodents, for protection against acute hearing loss such as acoustic trauma due to noise overexposure, antibiotic use, and cancer chemotherapies. These approaches were initiated using systemic injections or oral administration of otoprotectants. Now, exciting new options for local drug delivery are being developed, which open up the possibility of using novel otoprotective drugs or compounds that might not be suitable for systemic use, or that might interfere with the efficacious actions of chemotherapeutic agents or antibiotics. These include the use of nanoparticles (with or without magnetic field supplementation), hydrogels, cochlear micropumps, and new transtympanic injectable compounds, sometimes in combination with cochlear implants.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I4JaPo
via IFTTT

Eyes and ears: using eye tracking and pupillometry to understand challenges to speech recognition

Publication date: Available online 4 May 2018
Source:Hearing Research
Author(s): Kristin J. Van Engen, Drew J. McLaughlin
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Iel9BG
via IFTTT

Bone morphogenetic protein 4 antagonizes hair cell regeneration in the avian auditory epithelium

Publication date: Available online 2 May 2018
Source:Hearing Research
Author(s): Rebecca M. Lewis, Jesse J. Keller, Liangcai Wan, Jennifer S. Stone
Permanent hearing loss is often a result of damage to cochlear hair cells, which mammals are unable to regenerate. Non-mammalian vertebrates such as birds replace damaged hair cells and restore hearing function, but mechanisms controlling regeneration are not understood. The secreted protein bone morphogenetic protein 4 (BMP4) regulates inner ear morphogenesis and hair cell development. To investigate mechanisms controlling hair cell regeneration in birds, we examined expression and function of BMP4 in the auditory epithelia (basilar papillae) of chickens of either sex after hair cell destruction by ototoxic antibiotics. In mature basilar papillae, BMP4 mRNA is highly expressed in hair cells, but not in hair cell progenitors (supporting cells). Supporting cells transcribe genes encoding receptors for BMP4 (BMPR1A, BMPR1B, and BMPR2) and effectors of BMP4 signaling (ID transcription factors). Following hair cell destruction, BMP4 transcripts are lost from the sensory epithelium. Using organotypic cultures, we demonstrate that treatments with BMP4 during hair cell destruction prevent supporting cells from upregulating expression of the pro-hair cell transcription factor ATOH1, entering the cell cycle, and fully transdifferentiating into hair cells, but they do not induce cell death. By contrast, noggin, a BMP4 inhibitor, increases numbers of regenerated hair cells. These findings demonstrate that BMP4 antagonizes hair cell regeneration in the chicken basilar papilla, at least in part by preventing accumulation of ATOH1 in hair cell precursors.



from #Audiology via ola Kala on Inoreader https://ift.tt/2I7Nptu
via IFTTT

Editorial Board

Publication date: May 2018
Source:Hearing Research, Volume 362





from #Audiology via ola Kala on Inoreader https://ift.tt/2I9KRa9
via IFTTT
