Thursday, April 27, 2017

Electrocochleography in Cochlear Implant Recipients With Residual Hearing: Comparison With Audiometric Thresholds

Objectives: To determine whether electrocochleography (ECoG) thresholds, especially cochlear microphonic and auditory nerve neurophonic thresholds, measured using an intracochlear electrode, can be used to predict pure-tone audiometric thresholds following cochlear implantation in ears with residual hearing. Design: Pure-tone audiometric thresholds and ECoG waveforms were measured at test frequencies from 125 to 4000 Hz in 21 Advanced Bionics cochlear implant recipients with residual hearing in the implanted ear. The “difference” and “summation” responses were computed from the ECoG waveforms measured from two alternating phases of stimulation. The interpretation is that difference responses are largely from the cochlear microphonic, while summation responses are largely from the auditory nerve neurophonic. The pure-tone audiometric thresholds were also measured with the same equipment used for ECoG measurements. Results: Difference responses were observed in all 21 implanted ears, whereas summation response waveforms were observed in only 18 ears. The ECoG thresholds strongly correlated (r2 = 0.87, n = 150 for difference response; r2 = 0.82, n = 72 for summation response) with audiometric thresholds. The mean difference between the difference response and audiometric thresholds was −3.2 (±9.0) dB, while the mean difference between summation response and audiometric thresholds was −14 (±11) dB. In four out of 37 measurements, difference responses were measured at frequencies where no behavioral thresholds were present. Conclusions: ECoG thresholds may provide a useful metric for the assessment of residual hearing in cochlear implant subjects for whom it is not possible to perform behavioral audiometric testing.
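The difference/summation computation described in this abstract is simple to express in code: the cochlear microphonic inverts with stimulus polarity while the neurophonic does not, so subtracting and adding the two averaged responses separates the components. The sketch below is an illustrative reconstruction with toy waveforms, not the authors' analysis pipeline; all names are assumptions.

```python
import numpy as np

def difference_summation(cond, rare):
    """Split averaged ECoG responses to condensation- and rarefaction-phase
    tone bursts into a difference response (dominated by the cochlear
    microphonic, which inverts with stimulus polarity) and a summation
    response (dominated by the auditory nerve neurophonic, which does not)."""
    cond = np.asarray(cond, dtype=float)
    rare = np.asarray(rare, dtype=float)
    return (cond - rare) / 2.0, (cond + rare) / 2.0

# toy check: a polarity-following component (CM-like) plus a
# polarity-invariant, rectified component (ANN-like)
t = np.linspace(0.0, 0.01, 500)
cm = np.sin(2 * np.pi * 500 * t)           # inverts between phases
ann = np.abs(np.sin(2 * np.pi * 500 * t))  # same in both phases
diff, summ = difference_summation(cm + ann, -cm + ann)
```

With these synthetic inputs, `diff` recovers the polarity-following component exactly and `summ` recovers the polarity-invariant one.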

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdPz0R
via IFTTT

The Effect of Interaural Mismatches on Contralateral Unmasking With Single-Sided Vocoders

Objectives: Cochlear-implant (CI) users with single-sided deafness (SSD)—that is, one normal-hearing (NH) ear and one CI ear—can obtain some unmasking benefits when a mixture of target and masking voices is presented to the NH ear and a copy of just the masking voices is presented to the CI ear. NH listeners show similar benefits in a simulation of SSD-CI listening, whereby a mixture of target and masking voices is presented to one ear and a vocoded copy of the masking voices is presented to the opposite ear. However, the magnitude of the benefit for SSD-CI listeners is highly variable across individuals and is on average less than for NH listeners presented with vocoded stimuli. One possible explanation for the limited benefit observed for some SSD-CI users is that temporal and spectral discrepancies between the acoustic and electric ears might interfere with contralateral unmasking. The present study presented vocoder simulations to NH participants to examine the effects of interaural temporal and spectral mismatches on contralateral unmasking. Design: Speech-reception performance was measured in a competing-talker paradigm for NH listeners presented with vocoder simulations of SSD-CI listening. In the monaural condition, listeners identified target speech masked by two same-gender interferers, presented to the left ear. In the bilateral condition, the same stimuli were presented to the left ear, but the right ear was presented with a noise-vocoded copy of the interfering voices. This paradigm tested whether listeners could integrate the interfering voices across the ears to better hear the monaural target. Three common distortions inherent in CI processing were introduced to the vocoder processing: spectral shifts, temporal delays, and reduced frequency selectivity. 
Results: In experiment 1, contralateral unmasking (i.e., the benefit from adding the vocoded maskers to the second ear) was impaired by spectral mismatches of four equivalent rectangular bandwidths or greater. This is equivalent to roughly a 3.6-mm mismatch between the cochlear places stimulated in the electric and acoustic ears, which is on the low end of the average expected mismatch for SSD-CI listeners. In experiment 2, performance was negatively affected by a temporal mismatch of 24 ms or greater, but not for mismatches in the 0 to 12 ms range expected for SSD-CI listeners. Experiment 3 showed an interaction between spectral shift and spectral resolution, with less effect of interaural spectral mismatches when the number of vocoder channels was reduced. Experiment 4 applied interaural spectral and temporal mismatches in combination. Performance was best when both frequency and timing were aligned, but in cases where a mismatch was present in one dimension (either frequency or latency), the addition of mismatch in the second dimension did not further disrupt performance. Conclusions: These results emphasize the need for interaural alignment—in timing and especially in frequency—to maximize contralateral unmasking for NH listeners presented with vocoder simulations of SSD-CI listening. Improved processing strategies that reduce mismatch between the electric and acoustic ears of SSD-CI listeners might improve their ability to obtain binaural benefits in multitalker environments.
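The "four equivalent rectangular bandwidths ≈ 3.6 mm" conversion above can be sanity-checked with two standard formulas: the Glasberg and Moore ERB-number scale and the Greenwood frequency-place map for the human cochlea. This is an illustrative back-of-envelope calculation, not the authors' exact computation; the constants are the commonly cited human values, and results vary somewhat with the reference frequency.

```python
import numpy as np

def erb_number(f_hz):
    # Glasberg & Moore (1990) ERB-number ("cam") scale
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_number_to_hz(n):
    # inverse of the ERB-number scale
    return 1000.0 * (10.0 ** (n / 21.4) - 1.0) / 4.37

def greenwood_place_mm(f_hz, length_mm=35.0):
    # Greenwood (1990) human map f = 165.4*(10**(2.1*x/L) - 1),
    # with place x in mm measured from the apex
    return (length_mm / 2.1) * np.log10(f_hz / 165.4 + 1.0)

def mismatch_mm(f_hz, shift_erbs):
    """Cochlear-place shift (mm) corresponding to a spectral shift
    of `shift_erbs` ERBs applied at reference frequency `f_hz`."""
    f_shifted = erb_number_to_hz(erb_number(f_hz) + shift_erbs)
    return greenwood_place_mm(f_shifted) - greenwood_place_mm(f_hz)
```

For a 4-ERB upward shift at 1 kHz this yields roughly 3.3 mm, in the same range as the 3.6 mm quoted in the abstract.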

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdxUqj
via IFTTT

Long-Term Synergistic Interaction of Cisplatin- and Noise-Induced Hearing Losses

Objective: Past experiments in the literature have shown that cisplatin interacts synergistically with noise to create hearing loss. Much of the previous work on the synergistic interaction of noise and cisplatin tested exposures that occurred very close together in time. The present study assessed whether rats that have been exposed to cisplatin continue to show increased susceptibility to noise-induced hearing loss months after conclusion of the cisplatin exposure. Design: Thirty-two Fischer 344/NHsd rats were exposed to one of five conditions: (1) cisplatin exposure followed by immediate cochlear tissue harvest, (2) cisplatin exposure and a 20-week monitoring period before tissue harvest, (3) cisplatin exposure followed immediately by noise exposure, (4) cisplatin exposure followed by noise exposure 16 weeks later, and (5) noise exposure without cisplatin exposure. The cisplatin exposure was an 8-week interval in which cisplatin was given every 2 weeks. Cochlear injury was evaluated using auditory brainstem response thresholds, P1 wave amplitudes, and postmortem outer hair cell counts. Results: The 8-week cisplatin exposure induced little threshold shift or P1 amplitude loss, and a small lesion of missing outer hair cells in the basal half of the cochlea. The rats exposed to noise immediately after the cisplatin exposure interval showed a synergistic interaction of cisplatin and noise. The group exposed to noise 16 weeks after the cisplatin exposure interval also showed more severe threshold shift and outer hair cell loss than control subjects. The controls exposed to cisplatin and monitored for 20 weeks showed little threshold shift or outer hair cell loss, but did show P1 wave amplitude changes over the 20-week monitoring period. 
Conclusions: The results from the groups exposed to cisplatin followed by noise, combined with the findings from the cisplatin- and noise-only groups, suggest that the cisplatin induced cochlear injuries that were not severe enough to result in threshold shift, but left the cochlea in a state of heightened susceptibility to future injury. The heightened susceptibility to noise injury was still present 16 weeks after the conclusion of the cisplatin exposure.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzGri
via IFTTT

Effects of Hearing Impairment and Hearing Aid Amplification on Listening Effort: A Systematic Review

Objectives: To undertake a systematic review of available evidence on the effect of hearing impairment and hearing aid amplification on listening effort. Two research questions were addressed: Q1) does hearing impairment affect listening effort? and Q2) can hearing aid amplification affect listening effort during speech comprehension? Design: English language articles were identified through systematic searches in PubMed, EMBASE, Cinahl, the Cochrane Library, and PsycINFO from inception to August 2014. References of eligible studies were checked. The Population, Intervention, Control, Outcomes, and Study design strategy was used to create inclusion criteria for relevance. It was not feasible to apply a meta-analysis of the results from comparable studies. For the articles identified as relevant, a quality rating, based on the 2011 Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines, was carried out to judge the reliability and confidence of the estimated effects. Results: The primary search produced 7017 unique hits using the keywords: hearing aids OR hearing impairment AND listening effort OR perceptual effort OR ease of listening. Of these, 41 articles fulfilled the Population, Intervention, Control, Outcomes, and Study design selection criteria of: experimental work on hearing impairment OR hearing aid technologies AND listening effort OR fatigue during speech perception. The methods applied in those articles were categorized into subjective, behavioral, and physiological assessment of listening effort. For each study, the statistical analysis addressing research question Q1 and/or Q2 was extracted. In seven articles more than one measure of listening effort was provided. Evidence relating to Q1 was provided by 21 articles that reported 41 relevant findings. Evidence relating to Q2 was provided by 27 articles that reported 56 relevant findings. 
The quality of evidence on both research questions (Q1 and Q2) was very low, according to the Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1) as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. Conclusion: In summary, we could only identify scientific evidence from physiological measurement methods, suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study population, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzTuB
via IFTTT

Prevalence, Incidence Proportion, and Heritability for Tinnitus: A Longitudinal Twin Study

Objectives: The purpose of this longitudinal twin study was to explore the effect of tinnitus on hearing thresholds and threshold shifts over two decades and to investigate the genetic contribution to tinnitus in a male twin cohort (n = 1114 at baseline and 583 at follow-up). The hypothesis was that participants with faster hearing deterioration had a higher risk for developing tinnitus and that there is an underlying role of genetic influences on tinnitus. Design: Male mono- and dizygotic twin pairs, born between 1914 and 1958, were included. Mixed models were used for comparison of hearing threshold shifts, adjusted for age. A co-twin comparison was made within pairs discordant for tinnitus. The relative influence of genetic and environmental factors was estimated by genetic modeling. Results: The overall prevalence of tinnitus was 13.5% at baseline (mean age 50) and 34.4% at follow-up (mean age 67). The overall incidence proportion was 27.8%. Participants who reported tinnitus at baseline or at both time points were older. At baseline, the hearing thresholds differed between tinnitus cases and controls at all frequencies. New tinnitus cases at follow-up had the greatest hearing threshold shift at the high-frequency area compared with the control group. Within pairs, the tinnitus twin had poorer hearing than his unaffected co-twin, more so for dizygotic than monozygotic twin pairs. The relative proportion of additive genetic factors was approximately 0.40 at both time points, and the influence of individual-specific environment was 0.56 to 0.61. The influence of genetic factors on tinnitus was largely independent of genetic factors for hearing thresholds. Conclusions: Our hypotheses were confirmed: The fastest hearing deterioration occurred for new tinnitus cases. A moderate genetic influence for tinnitus was confirmed.
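For readers unfamiliar with how twin correlations translate into variance proportions like the 0.40 (genetic) and 0.56 to 0.61 (individual-specific environment) reported above, Falconer's classical approximation illustrates the logic. Note this is a deliberately simplified stand-in for the structural-equation genetic modeling such studies actually use, and the example correlations below are made up for illustration.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer approximations from monozygotic (MZ) and dizygotic (DZ)
    twin correlations: additive genetic (A), shared environment (C),
    and individual-specific environment (E) variance proportions."""
    a = 2.0 * (r_mz - r_dz)  # MZ pairs share ~2x the additive genetic variance of DZ pairs
    c = 2.0 * r_dz - r_mz    # similarity MZ and DZ pairs share beyond genetics
    e = 1.0 - r_mz           # whatever even MZ twins do not share
    return a, c, e
```

For example, hypothetical correlations r_mz = 0.40 and r_dz = 0.20 would give A = 0.40, C = 0.0, and E = 0.60, the same ballpark as the estimates reported in this abstract.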

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzGaM
via IFTTT

Response to Letter to the Editor: RE: Henry, J.A., Frederick, M., Sell, S., Griest, S., Abrams, H. (2015). Validation of a Novel Combination Hearing Aid and Tinnitus Therapy Device, Ear Hear, 36(1): 42–52

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzFDK
via IFTTT

Sequential Bilateral Cochlear Implantation in Children: Outcome of the Second Implant and Long-Term Use

Objectives: The aim of this retrospective cohort study was to assess speech perception outcomes of second-side cochlear implants (CI2) relative to first-side implants (CI1) in 160 participants who received their CI1 as a child. The predictive factors of CI2 speech perception outcomes were investigated. In addition, CI2 device use predictive models were assessed using the categorical variable of participant’s decision to use CI2 for a minimum of 5 years after surgery. Findings from a prospective study that evaluated the bilateral benefit for speech recognition in noise in a participant subgroup (n = 29) are also presented. Design: Participants received CI2 between 2003 and 2009 (and CI1 between 1988 and 2008), and were observed from surgery to a minimum of 5 years after sequential surgery. Group A (n = 110) comprised prelingually deaf children (severe to profound) with no or little acquired oral language before implantation, while group B (n = 50) comprised prelingually deaf children with acquired language before implantation, in addition to perilingually and postlingually deaf children. Speech perception outcomes included the monosyllable test score or the closed-set Early Speech Perception test score if the monosyllable test was too difficult. To evaluate bilateral benefit for speech recognition in noise, participants were tested with the Hearing in Noise test in bilateral and “best CI” test conditions with noise from the front and noise from either side. Bilateral advantage was calculated by subtracting the Hearing in Noise test speech reception thresholds in noise obtained in the bilateral listening mode from those obtained in the unilateral “best CI” mode. Results: On average, CI1 speech perception was 28% better than CI2 performance in group A; in group B, the difference was 20%. A small bilateral speech perception benefit of using CI2 was measured: 3% in group A and 7% in group B. 
A longer interimplant interval predicted poorer CI2 speech perception in group A; in group B, this held only for those who did not use a hearing aid during the interimplant interval. At least 5 years after surgery, 25% of group A and 10% of group B did not use CI2. In group A, predictors of CI2 nonuse were a longer interimplant interval or an older age at CI2 surgery. A large difference in speech perception between the two sides was a predictor of CI2 nonuse in both groups. Bilateral advantage for speech recognition in noise was mainly obtained for the condition with noise near the “best CI”; the addition of a second CI offered a new head shadow benefit. A small, nonsignificant mean disadvantage was measured when the noise was located opposite the “best CI.” Conclusions: Generally, in both groups, if CI2 performance did not become comparable with that of CI1, participants were more likely to choose not to use CI2 after some time. In group A, increased interimplant intervals predicted poorer CI2 speech perception results and increased the risk of not using CI2 at a later date. Bilateral benefit was mainly obtained when noise was opposite to CI2, introducing a new head shadow benefit.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzEzG
via IFTTT

Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

Objectives: The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design: Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results: Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. 
Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition, whereas CHH showed no significant difference in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions: The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzKaw
via IFTTT

Dichotic Digits Test Performance Across the Ages: Results From Two Large Epidemiologic Cohort Studies

Objectives: The Dichotic Digits test (DDT) has been widely used to assess central auditory processing but there is limited information on observed DDT performance in a general population. The purpose of the study was to determine factors related to DDT performance in a large cohort spanning the adult age range. Design: The study was cross-sectional and subjects were participants in the Epidemiology of Hearing Loss Study (EHLS), a population-based investigation of age-related hearing loss, or the Beaver Dam Offspring Study (BOSS), a study of aging in the adult offspring of the EHLS members. Subjects seen during the 4th EHLS (2008 to 2010) or the 2nd BOSS (2010 to 2013) examination were included (N = 3655 participants [1391 EHLS, 2264 BOSS]; mean age = 61.1 years, range = 21 to 100 years). The free and right ear-directed recall DDTs were administered using 25 sets of triple-digit pairs with a 70 dB HL presentation level. Pure-tone audiometric testing was conducted and the pure-tone threshold average (PTA) at 0.5, 1, 2, and 4 kHz was categorized using the worse ear: no loss = PTA ≤ 25 dB HL; mild loss = 25 < PTA ≤ 40 dB HL; moderate or greater loss = PTA > 40 dB HL. Cognitive impairment was defined as a Mini-Mental State Examination score

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdkF9a
via IFTTT

Directional Microphone Contralateral Routing of Signals in Cochlear Implant Users: A Within-Subjects Comparison

Objectives: For medical or financial reasons, bilateral cochlear implantation is not always possible in bilaterally deafened patients. In such cases, a contralateral routing of signals (CROS) device could complement the monaural implant. The goal of our study was to compare the benefit of three different conditions: (1) unilateral cochlear implant (CI) alone, (2) unilateral CI complemented with a directional CROS microphone, and (3) bilateral CIs. Design: Twelve experienced bilateral CI users were tested. Speech reception in noise and sound localization were measured in the three above-mentioned conditions. Patients evaluated which condition they presumed to be activated and the subjective benefit on a hearing scale. Results: Compared with the unilateral CI condition, the additional CROS device provided significantly better speech intelligibility in noise when speech signals came from the front or side of the CROS microphone. Only a small subjective improvement was observed. Bilaterally activated CIs further improved the hearing performance. This was the only condition where sound localization was possible. Subjective evaluation showed a clear preference for the bilateral CI treatment. Conclusions: In bilaterally deafened patients, bilateral implantation is the preferable treatment. However, patients with one implant only could benefit from an additional directional microphone CROS device.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdJe5J
via IFTTT

Children With Cochlear Implants and Their Parents: Relations Between Parenting Style and Children’s Social-Emotional Functioning

Objectives: Parenting a child who has a severe or profound hearing loss can be challenging and at times stressful, and might cause parents to use more adverse parenting styles compared with parents of hearing children. Parenting styles are known to impact children’s social-emotional development. Children with a severe to profound hearing loss may be more reliant on their parents in terms of their social-emotional development when compared with their hearing peers, who typically have greater opportunities to interact with and learn from others outside their family environment. Identifying the impact that parenting styles have on the social-emotional development of children who have cochlear implants (CIs) could help advance these children’s well-being. Therefore, the authors compared parenting styles of parents with hearing children and of parents with children who have a CI, and examined the relations between parenting styles and two key aspects of children’s social-emotional functioning: emotion regulation and empathy. Design: Ninety-two hearing parents and their children (aged 1 to 5 years), who were either hearing (n = 46) or had a CI (n = 46), participated in this cross-sectional study. Parents completed questionnaires concerning their parenting styles (i.e., positive, negative, and uninvolved), and regarding the extent to which their children expressed negative emotions (i.e., anger and sadness) and empathy. Furthermore, an emotion-regulation task measuring negative emotionality was administered to the children. Results: No differences in reported parenting styles were observed between parents of hearing children and parents of children with a CI. In addition, negative and uninvolved parenting styles were related to higher levels of negative emotionality in both groups of children. No relation was found between positive parenting and children’s social-emotional functioning. Hearing status did not moderate these relationships. 
Language mediated the relationship between parenting styles and children’s social-emotional functioning. Conclusions: Children’s hearing status did not impact parenting styles. This may be a result of the support that parents of children with a CI receive during their enrollment in the rehabilitation program before and after implantation. Rehabilitation programs should dedicate more attention to informing parents about the impact of parenting behaviors on children’s social-emotional functioning. Offering parenting courses as part of the program could promote children’s well-being. Future longitudinal research should address the directionality of the relations between parenting styles and children’s social-emotional functioning.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdkDhy
via IFTTT

Letter to the Editor: Reporting of Data to Inform the Design of a Definitive Trial Re: Henry, J.A., Frederick, M., Sell, S., Griest, S., Abrams, H. (2015). Validation of a Novel Combination Hearing Aid and Tinnitus Therapy Device, Ear Hear, 36(1): 42–52

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnBgEa
via IFTTT

Effects of Stimulus Polarity and Artifact Reduction Method on the Electrically Evoked Compound Action Potential

Objective: Previous research from our laboratory comparing electrically evoked compound action potential (ECAP) artifact reduction methods has shown larger amplitudes and lower thresholds with cathodic-leading forward masking (CathFM) than with alternating polarity (AltPol). One interpretation of this result is that the anodic-leading phase used with AltPol elicits a less excitatory response (in contrast to results from recent studies with humans), which, when averaged with responses to cathodic-leading stimuli, results in smaller amplitudes. Another interpretation is that the latencies of the responses to anodic- and cathodic-leading pulses differ, which, when averaged together, result in smaller amplitudes than for either polarity alone due to temporal smearing. The purpose of this study was to separate the effects of stimulus polarity and artifact reduction method to determine the relative effects of each. Design: This study used a within-subjects design. ECAP growth functions were obtained using CathFM, anodic-leading forward masking (AnodFM), and AltPol for 23 CI recipients (N = 13 Cochlear and N = 10 Advanced Bionics). N1 latency, amplitude, slope of the amplitude growth function, and threshold were compared across methods. Data were analyzed separately for each manufacturer due to inherent differences between devices. Results: N1 latencies were significantly shorter for AnodFM than for CathFM and AltPol for both Cochlear and Advanced Bionics participants. Amplitudes were larger for AnodFM than for either CathFM or AltPol for Cochlear recipients; amplitude was not significantly different across methods for Advanced Bionics recipients. Slopes were shallowest for CathFM for Cochlear subjects, but were not significantly different among methods for Advanced Bionics subjects. Thresholds with AltPol were significantly higher than both FM methods for Cochlear recipients; there was no difference in threshold across methods for the Advanced Bionics recipients. 
Conclusions: For Cochlear devices, the smaller amplitudes and higher thresholds observed for AltPol seem to be the result of latency differences between polarities. These results suggest that AltPol is not ideal for managing stimulus artifact for ECAP recordings. For the Advanced Bionics group, there were no significant differences among methods for amplitude, slope, or threshold, which suggests that polarity and artifact reduction method have little influence in these devices. We postulate that polarity effects are minimized for symmetrical biphasic pulses that lack an interphase gap, such as those used with Advanced Bionics devices; however, this requires further investigation.
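The latency-smearing interpretation above is easy to demonstrate numerically: averaging two otherwise identical N1 troughs that differ only in latency yields a shallower trough than either single-polarity response alone. The toy simulation below (Gaussian-shaped troughs with made-up latencies and widths) is illustrative only, not a model of real ECAP data.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4000)  # time after stimulus, ms

def n1(latency_ms, amp=1.0, width_ms=0.1):
    # toy N1 trough modeled as a negative Gaussian
    return -amp * np.exp(-((t - latency_ms) ** 2) / (2.0 * width_ms ** 2))

cathodic = n1(0.40)                  # longer-latency response
anodic = n1(0.30)                    # shorter-latency response
alt_pol = (cathodic + anodic) / 2.0  # alternating-polarity average

# the averaged trough is shallower than either single-polarity trough
smeared_amp = np.abs(alt_pol).max()  # ~0.88 of the single-polarity amplitude
```

With a 0.1-ms latency offset and 0.1-ms trough width, the alternating-polarity average loses roughly 12% of the amplitude purely from temporal smearing, consistent with the conclusion that AltPol can underestimate ECAP amplitude.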

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnY6M4
via IFTTT

Normative Wideband Reflectance, Equivalent Admittance at the Tympanic Membrane, and Acoustic Stapedius Reflex Threshold in Adults

Objectives: Wideband acoustic immittance (WAI) measures such as pressure reflectance, parameterized by absorbance and group delay, equivalent admittance at the tympanic membrane (TM), and acoustic stapedius reflex threshold (ASRT) describe middle ear function across a wide frequency range, compared with traditional tests employing a single frequency. The objective of this study was to obtain normative data using these tests for a group of normal-hearing adults and investigate test–retest reliability using a longitudinal design. Design: A longitudinal prospective design was used to obtain normative test and retest data on clinical and WAI measures. Subjects were 13 males and 20 females (mean age = 26 years). Inclusion criteria included normal audiometry and clinical immittance. Subjects were tested on two separate visits approximately 1 month apart. Reflectance and equivalent admittance at the TM were measured from 0.25 to 8.0 kHz under three conditions: at ambient pressure in the ear canal and with pressure sweeps from positive to negative pressure (downswept) and negative to positive pressure (upswept). Equivalent admittance at the TM was calculated using admittance measurements at the probe tip that were adjusted using a model of sound transmission in the ear canal and acoustic estimates of ear-canal area and length. Wideband ASRTs were measured at tympanometric peak pressure (TPP) derived from the average TPP of downswept and upswept tympanograms. Descriptive statistics were obtained for all WAI responses, and wideband and clinical ASRTs were compared. Results: Mean absorbance at ambient pressure and TPP demonstrated a broad band-pass pattern typical of previous studies. Test–retest differences were lower for absorbance at TPP for the downswept method compared with ambient pressure at frequencies between 1.0 and 1.26 kHz. 
Mean tympanometric peak-to-tail differences for absorbance were greatest around 1.0 to 2.0 kHz and similar for positive and negative tails. Mean group delay at ambient pressure and at TPP were greatest between 0.32 and 0.6 kHz at 200 to 300 μsec, reduced at frequencies between 0.8 and 1.5 kHz, and increased above 1.5 kHz to around 150 μsec. Mean equivalent admittance at the TM had a lower level for the ambient method than at TPP for both sweep directions below 1.2 kHz, but the difference between methods was only statistically significant for the comparison between the ambient method and TPP for the upswept tympanogram. Mean equivalent admittance phase was positive at all frequencies. Test–retest reliability of the equivalent admittance level ranged from 1 to 3 dB at frequencies below 1.0 kHz, but increased to 8 to 9 dB at higher frequencies. The mean wideband ASRT for an ipsilateral broadband noise activator was 12 dB lower than the clinical ASRT, but had poorer reliability. Conclusions: Normative data for the WAI test battery revealed minor differences for results at ambient pressure compared with tympanometric methods at TPP for reflectance, group delay, and equivalent admittance level at the TM for subjects with middle ear pressure within ±100 daPa. Test–retest reliability was better for absorbance at TPP for the downswept tympanogram compared with ambient pressure at frequencies around 1.0 kHz. Large peak-to-tail differences in absorbance combined with good reliability at frequencies between about 0.7 and 3.0 kHz suggest that this may be a sensitive frequency range for interpreting absorbance at TPP. The mean wideband ipsilateral ASRT was lower than the clinical ASRT, consistent with previous studies. Results are promising for the use of a wideband test battery to evaluate middle ear function.
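The quantities in this abstract derive from the complex pressure reflectance R(f): energy absorbance is 1 − |R|², and group delay is the negative derivative of the unwrapped reflectance phase with respect to angular frequency. A minimal sketch of those two definitions (not the study's instrumentation code; array names are assumptions):

```python
import numpy as np

def absorbance(R):
    """Energy absorbance from complex pressure reflectance: 1 - |R|^2."""
    return 1.0 - np.abs(np.asarray(R, dtype=complex)) ** 2

def group_delay_s(R, f_hz):
    """Group delay in seconds: -d(phase)/d(omega), with phase unwrapped."""
    phase = np.unwrap(np.angle(np.asarray(R, dtype=complex)))
    return -np.gradient(phase, 2.0 * np.pi * np.asarray(f_hz, dtype=float))

# sanity check with a pure delayed reflection: |R| = 1 (nothing absorbed),
# phase = -2*pi*f*tau, so absorbance is 0 and group delay is tau everywhere
f = np.linspace(250.0, 8000.0, 200)   # Hz, roughly the 0.25-8.0 kHz range above
tau = 200e-6                          # 200 microseconds
R = np.exp(-1j * 2.0 * np.pi * f * tau)
```

A lossless reflection delayed by τ thus shows up as zero absorbance with a flat group delay of τ; real ears deviate from both, which is what the normative absorbance and group-delay curves characterize.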

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnBgnE
via IFTTT
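
The reflectance quantities in the abstract above are related by simple formulas. A minimal numpy sketch, using made-up reflectance values rather than the study's data, of how absorbance and group delay follow from the complex pressure reflectance:

```python
import numpy as np

# Illustrative values only: a complex pressure reflectance R(f) at a few
# frequencies (hypothetical, not data from the study).
freqs_hz = np.array([250.0, 1000.0, 4000.0])
R = np.array([0.9 + 0.2j, 0.3 - 0.1j, 0.6 + 0.3j])

# Energy absorbance is one minus the squared magnitude of the pressure
# reflectance: the fraction of incident sound power absorbed by the middle ear.
absorbance = 1.0 - np.abs(R) ** 2

# Group delay is the negative derivative of reflectance phase with respect to
# angular frequency; with sampled data a finite difference is the usual choice.
phase = np.unwrap(np.angle(R))
group_delay = -np.gradient(phase, 2 * np.pi * freqs_hz)
```

With these placeholder values, the 250-Hz point has high reflectance magnitude and therefore low absorbance, matching the low-frequency roll-off of the band-pass pattern described in the Results.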

Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of Eisenberg et al. (2002)

Objectives: Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Design: Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Results: Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects.
However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions: First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnt0Eo
via IFTTT
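
Four-channel noise vocoding of the kind used above can be sketched as follows: split the signal into frequency bands, extract each band's temporal envelope, and use it to modulate band-limited noise. The channel edges, FFT-based filtering, and carrier normalization here are generic assumptions, not the study's processing parameters:

```python
import numpy as np

def analytic_signal(x):
    """Hilbert-transform analytic signal via the FFT (numpy-only)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def noise_vocode(signal, fs, n_channels=4, f_lo=200.0, f_hi=7000.0):
    """Crude FFT-based noise vocoder: band envelopes modulate band noise.

    A sketch of the general technique only; real implementations smooth the
    envelope with a low-pass filter, which is omitted here for brevity.
    """
    n = len(signal)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        band_sig = np.fft.irfft(np.where(band, spec, 0.0), n)
        envelope = np.abs(analytic_signal(band_sig))       # temporal envelope
        band_noise = np.fft.irfft(np.where(band, noise_spec, 0.0), n)
        rms = np.sqrt(np.mean(band_noise ** 2))
        if rms > 0:
            band_noise /= rms                              # unit-RMS carrier
        out += envelope * band_noise                       # modulated carrier
    return out
```

Reducing `n_channels` to four, as in the study, preserves the temporal envelope while discarding most spectral detail, which is exactly the degradation the children had to cope with.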

Three-Dimensional Force Profile During Cochlear Implantation Depends on Individual Geometry and Insertion Trauma

Objectives: To preserve acoustic hearing, cochlear implantation has to be as atraumatic as possible. Therefore, understanding the impact of the cochlear geometry on insertion forces and intracochlear trauma might help to adapt and improve the electrode insertion and reduce the probability of intracochlear trauma. Design: The study was conducted on 10 fresh-frozen human temporal bones. The inner ear was removed from the temporal bone. The bony capsule covering the scala vestibuli was removed and the dissected inner ear was mounted on the three-dimensional (3D) force measurement system (Agilent Technologies, Nano UTM, Santa Clara, CA). A lateral wall electrode array was inserted, and the forces were recorded in three dimensions with a sensitivity of 2 μN. Afterwards, the bones were scanned using a Skyscan 1173 micro-computed tomography (micro-CT) scanner. The obtained 3D force profiles were correlated with the videos of the insertions recorded through the microscope, and the micro-CT images. Results: A correlation was found between intracochlear force profiles measured in three different directions and intracochlear trauma detected with micro-CT imaging. The angle of insertion and the cochlear geometry had a significant impact on the electrode array insertion forces and possible insertion trauma. Intracochlear trauma occurred frequently within the first 180° from the round window, where buckling of the proximal part of the electrode carrier inside the cochlea and rupturing of the spiral ligament were observed. Conclusions: The combination of the 3D force measurement system and micro-CT can be used to characterize the mechanical behavior of a CI electrode array and some forms of insertion trauma. Intracochlear trauma does not always correlate with higher force amplitudes, but rather with an abrupt change of force directions.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnH4xv
via IFTTT
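
The conclusion above ties trauma to abrupt changes of force direction rather than to force amplitude. A small sketch, with hypothetical force samples, of the two quantities involved:

```python
import numpy as np

# Hypothetical 3D insertion-force samples (in newtons), one row per time
# step; made-up values for illustration, not the study's recordings.
forces = np.array([
    [0.002, 0.001, 0.010],
    [0.002, 0.001, 0.012],
    [0.010, -0.004, 0.003],   # abrupt redirection of the force vector
])

# Force amplitude: Euclidean norm of each 3D sample.
magnitudes = np.linalg.norm(forces, axis=1)

# Angle between consecutive force vectors; a large angle flags an abrupt
# change of force direction of the kind the study links to trauma.
a, b = forces[:-1], forces[1:]
cosines = np.sum(a * b, axis=1) / (
    np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
angles_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
```

In this toy trace the third sample barely changes the force magnitude but swings the direction by tens of degrees, illustrating why direction change and amplitude can dissociate.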

The Change in Electrical Stimulation Levels During 24 Months Postimplantation for a Large Cohort of Adults Using the Nucleus® Cochlear Implant

Objectives: To examine electrical stimulation data over 24 months postimplantation in adult implant users. The first aim was to calculate mean T and C levels for seven time points, for four cochlear segments, and two array types. The second aim was to (a) analyze the degree of change in each of the T and C levels as a function of dynamic range (DR) for six consecutive time point comparisons, for the four segments, and (b) to determine the proportion of participants with an acceptable degree of change. The third aim was to examine relationships between demographic factors and degree of change. Design: T levels, C levels, and dynamic ranges were extracted for 680 adults using Nucleus implants for the following postimplant time points: 2-, 3-, 6-, 9-, 12-, 18-, and 24-month. For each time point, mean levels were calculated for the four segments. The degree of change in each of the levels was analyzed for six consecutive time point comparisons. The criterion for an acceptable degree of change was ≤20% of DR. Results: Mean T level was significantly lower for the 2-month time point compared with all time points after the 3-month time point. Mean C level was significantly lower for the 2- and 3-month time points compared with all other time points. Mean T level was significantly lower for the apical compared with all other segments and for the lower-basal compared with the upper-basal segment. Mean C level was significantly different across all four segments. Mean C level for the basal segments was 4 CLs higher for the perimodiolar array compared with the straight array. No significant differences were evident for the mean degree of change between consecutive time point comparisons. For all segments, approximately 65 to 75% of the participants showed an average acceptable degree of change in levels from the 3- to 6-month comparison. The mean degree of change in T levels was significantly greater for the basal segments compared with all other segments.
The mean degree of change in levels was significantly greater for the otosclerosis group compared with all other groups, and for the prelingual onset of deafness group compared with the postlingual group. Conclusion: Given the very large cohort, this study provides evidence for the mean levels and the degree of change in these levels that should be expected for four segments in the first 24 months postimplantation for adults using Nucleus implants. The mean T and C levels were consistent after the 3- and 6-month time points postimplant, respectively. The degree of change was variable between individuals. For each segment, however, a large percentage of participants showed an average change of ≤20% in each of the T and C levels from the 3- to 6-month comparison. Given the large degree of change in levels for some groups, the results provide strong evidence in favor of frequent monitoring of levels in the first 24 months postimplantation for patients with otosclerosis, prelingual onset of deafness, and those who exhibit >20% change in levels after 3 months postimplantation.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnJoEM
via IFTTT
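
The ≤20%-of-dynamic-range criterion described above reduces to simple arithmetic. A sketch with hypothetical clinical levels (which time point supplies the DR is my assumption, as the abstract does not specify it):

```python
# Change in a T (or C) level between consecutive time points, expressed as
# a percentage of the dynamic range (DR = C level - T level). All numbers
# are hypothetical current levels (CL), not data from the study.

def change_percent_of_dr(level_t1, level_t2, t_level, c_level):
    dynamic_range = c_level - t_level
    return abs(level_t2 - level_t1) / dynamic_range * 100.0

# Example: a T level moves from 110 to 118 CL against a DR of 50 CL.
change = change_percent_of_dr(110, 118, t_level=110, c_level=160)
acceptable = change <= 20.0  # the study's criterion: change of <= 20% of DR
```

An 8-CL shift against a 50-CL dynamic range is a 16% change, so this hypothetical participant would fall inside the "acceptable" group.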

Cold Thermal Irrigation Decreases the Ipsilateral Gain of the Vestibulo-Ocular Reflex

Objectives: During head rotations, neuronal firing rates increase in ipsilateral and decrease in contralateral vestibular afferents. At low accelerations, this “push-pull mechanism” is linear. At high accelerations, however, the change of firing rates is nonlinear in that the ipsilateral increase of firing rate is larger than the contralateral decrease. This mechanism of stronger ipsilateral excitation than contralateral inhibition during high-acceleration head rotation, known as Ewald’s second law, is implemented within the nonlinear pathways. The authors asked whether caloric stimulation could provide an acceleration signal high enough to influence the contribution of the nonlinear pathway to the rotational vestibulo-ocular reflex gain (rVOR gain) during head impulses. Design: Caloric warm (44°C) and cold (24, 27, and 30°C) water irrigations of the left ear were performed in 7 healthy human subjects with the lateral semicircular canals oriented approximately earth-vertical (head inclined 30° from supine) and earth-horizontal (head inclined 30° from upright). Results: With the lateral semicircular canal oriented earth-vertical, the strongest cold caloric stimulus (24°C) significantly decreased the rVOR gain during ipsilateral head impulses, while all other irrigations, irrespective of head position, had no significant effect on rVOR gains during head impulses to either side. Conclusions: Strong caloric irrigation, which can only be achieved with cold water, reduces the rVOR gain during ipsilateral head impulses and thus demonstrates Ewald’s second law in healthy subjects. This unilateral gain reduction suggests that cold-water caloric irrigation shifts the set point of the nonlinear relation between head acceleration and the vestibular firing rate toward a less acceleration-sensitive zone.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnOIaY
via IFTTT

The Effect of Interaural Mismatches on Contralateral Unmasking With Single-Sided Vocoders

Objectives: Cochlear-implant (CI) users with single-sided deafness (SSD)—that is, one normal-hearing (NH) ear and one CI ear—can obtain some unmasking benefits when a mixture of target and masking voices is presented to the NH ear and a copy of just the masking voices is presented to the CI ear. NH listeners show similar benefits in a simulation of SSD-CI listening, whereby a mixture of target and masking voices is presented to one ear and a vocoded copy of the masking voices is presented to the opposite ear. However, the magnitude of the benefit for SSD-CI listeners is highly variable across individuals and is on average less than for NH listeners presented with vocoded stimuli. One possible explanation for the limited benefit observed for some SSD-CI users is that temporal and spectral discrepancies between the acoustic and electric ears might interfere with contralateral unmasking. The present study presented vocoder simulations to NH participants to examine the effects of interaural temporal and spectral mismatches on contralateral unmasking. Design: Speech-reception performance was measured in a competing-talker paradigm for NH listeners presented with vocoder simulations of SSD-CI listening. In the monaural condition, listeners identified target speech masked by two same-gender interferers, presented to the left ear. In the bilateral condition, the same stimuli were presented to the left ear, but the right ear was presented with a noise-vocoded copy of the interfering voices. This paradigm tested whether listeners could integrate the interfering voices across the ears to better hear the monaural target. Three common distortions inherent in CI processing were introduced to the vocoder processing: spectral shifts, temporal delays, and reduced frequency selectivity.
Results: In experiment 1, contralateral unmasking (i.e., the benefit from adding the vocoded maskers to the second ear) was impaired by spectral mismatches of four equivalent rectangular bandwidths or greater. This is equivalent to roughly a 3.6-mm mismatch between the cochlear places stimulated in the electric and acoustic ears, which is on the low end of the average expected mismatch for SSD-CI listeners. In experiment 2, performance was negatively affected by a temporal mismatch of 24 ms or greater, but not for mismatches in the 0 to 12 ms range expected for SSD-CI listeners. Experiment 3 showed an interaction between spectral shift and spectral resolution, with less effect of interaural spectral mismatches when the number of vocoder channels was reduced. Experiment 4 applied interaural spectral and temporal mismatches in combination. Performance was best when both frequency and timing were aligned, but in cases where a mismatch was present in one dimension (either frequency or latency), the addition of mismatch in the second dimension did not further disrupt performance. Conclusions: These results emphasize the need for interaural alignment—in timing and especially in frequency—to maximize contralateral unmasking for NH listeners presented with vocoder simulations of SSD-CI listening. Improved processing strategies that reduce mismatch between the electric and acoustic ears of SSD-CI listeners might improve their ability to obtain binaural benefits in multitalker environments.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdxUqj
via IFTTT
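
Experiment 1 above equates a 4-ERB interaural shift with roughly a 3.6-mm mismatch in cochlear place. One hedged way to reproduce a number of that order is to combine the Glasberg–Moore ERB-number scale with Greenwood's human place-frequency map; the reference frequency below is arbitrary, and the exact figure varies along the cochlea:

```python
import math

def erb_number(f_hz):
    """Glasberg & Moore ERB-number scale (Cams)."""
    return 21.4 * math.log10(4.37e-3 * f_hz + 1.0)

def erb_number_to_freq(n):
    """Inverse of erb_number: ERB number back to frequency in Hz."""
    return (10.0 ** (n / 21.4) - 1.0) / 4.37e-3

def greenwood_place_mm(f_hz):
    """Greenwood human place map: distance from the apex in mm."""
    return (1.0 / 0.06) * math.log10(f_hz / 165.4 + 1.0)

# A 4-ERB upward shift applied at an arbitrarily chosen 1500-Hz component:
f0 = 1500.0
f_shifted = erb_number_to_freq(erb_number(f0) + 4.0)
place_shift_mm = greenwood_place_mm(f_shifted) - greenwood_place_mm(f0)
```

At this reference frequency the place shift comes out a little over 3 mm, in the same range as the ~3.6-mm figure quoted in the abstract.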

Long-Term Synergistic Interaction of Cisplatin- and Noise-Induced Hearing Losses

Objective: Past experiments in the literature have shown that cisplatin interacts synergistically with noise to create hearing loss. Much of the previous work on the synergistic interaction of noise and cisplatin tested exposures that occurred very close together in time. The present study assessed whether rats that have been exposed to cisplatin continue to show increased susceptibility to noise-induced hearing loss months after conclusion of the cisplatin exposure. Design: Thirty-two Fischer 344/NHsd rats were exposed to one of five conditions: (1) cisplatin exposure followed by immediate cochlear tissue harvest, (2) cisplatin exposure and a 20-week monitoring period before tissue harvest, (3) cisplatin exposure followed immediately by noise exposure, (4) cisplatin exposure followed by noise exposure 16 weeks later, and (5) noise exposure without cisplatin exposure. The cisplatin exposure was an 8-week interval in which cisplatin was given every 2 weeks. Cochlear injury was evaluated using auditory brainstem response thresholds, P1 wave amplitudes, and postmortem outer hair cell counts. Results: The 8-week cisplatin exposure induced little threshold shift or P1 amplitude loss, and a small lesion of missing outer hair cells in the basal half of the cochlea. The rats exposed to noise immediately after the cisplatin exposure interval showed a synergistic interaction of cisplatin and noise. The group exposed to noise 16 weeks after the cisplatin exposure interval also showed more severe threshold shift and outer hair cell loss than control subjects. The controls exposed to cisplatin and monitored for 20 weeks showed little threshold shift or outer hair cell loss, but did show P1 wave amplitude changes over the 20-week monitoring period.
Conclusions: The results from the groups exposed to cisplatin followed by noise, combined with the findings from the cisplatin- and noise-only groups, suggest that the cisplatin induced cochlear injuries that were not severe enough to result in threshold shift, but left the cochlea in a state of heightened susceptibility to future injury. The heightened susceptibility to noise injury was still present 16 weeks after the conclusion of the cisplatin exposure.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzGri
via IFTTT

Effects of Hearing Impairment and Hearing Aid Amplification on Listening Effort: A Systematic Review

Objectives: To undertake a systematic review of available evidence on the effect of hearing impairment and hearing aid amplification on listening effort. Two research questions were addressed: Q1) does hearing impairment affect listening effort? and Q2) can hearing aid amplification affect listening effort during speech comprehension? Design: English language articles were identified through systematic searches in PubMed, EMBASE, Cinahl, the Cochrane Library, and PsycINFO from inception to August 2014. References of eligible studies were checked. The Population, Intervention, Control, Outcomes, and Study design strategy was used to create inclusion criteria for relevance. It was not feasible to perform a meta-analysis of the results from comparable studies. For the articles identified as relevant, a quality rating, based on the 2011 Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines, was carried out to judge the reliability and confidence of the estimated effects. Results: The primary search produced 7017 unique hits using the keywords: hearing aids OR hearing impairment AND listening effort OR perceptual effort OR ease of listening. Of these, 41 articles fulfilled the Population, Intervention, Control, Outcomes, and Study design selection criteria of: experimental work on hearing impairment OR hearing aid technologies AND listening effort OR fatigue during speech perception. The methods applied in those articles were categorized into subjective, behavioral, and physiological assessment of listening effort. For each study, the statistical analysis addressing research question Q1 and/or Q2 was extracted. In seven articles more than one measure of listening effort was provided. Evidence relating to Q1 was provided by 21 articles that reported 41 relevant findings. Evidence relating to Q2 was provided by 27 articles that reported 56 relevant findings.
The quality of evidence on both research questions (Q1 and Q2) was very low, according to the Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1) as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. Conclusion: In summary, we could only identify scientific evidence from physiological measurement methods, suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study population, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzTuB
via IFTTT

Prevalence, Incidence Proportion, and Heritability for Tinnitus: A Longitudinal Twin Study

Objectives: The purpose of this longitudinal twin study was to explore the effect of tinnitus on hearing thresholds and threshold shifts over two decades and to investigate the genetic contribution to tinnitus in a male twin cohort (n = 1114 at baseline and 583 at follow-up). The hypothesis was that participants with faster hearing deterioration had a higher risk for developing tinnitus and there is an underlying role of genetic influences on tinnitus. Design: Male mono- and dizygotic twin pairs, born between 1914 and 1958, were included. Mixed models were used for comparison of hearing threshold shifts, adjusted for age. A co-twin comparison was made within pairs discordant for tinnitus. The relative influence of genetic and environmental factors was estimated by genetic modeling. Results: The overall prevalence of tinnitus was 13.5% at baseline (age 50) and 34.4% at follow-up (age 67). The overall incidence proportion was 27.8%. Participants who reported tinnitus at baseline or at both time points were older. At baseline, the hearing thresholds differed between tinnitus cases and controls at all frequencies. New tinnitus cases at follow-up had the greatest hearing threshold shift at the high-frequency area compared with the control group. Within pairs, the tinnitus twin had poorer hearing than his unaffected co-twin, more so for dizygotic than monozygotic twin pairs. The relative proportion of additive genetic factors was approximately 0.40 at both time points, and the influence of individual-specific environment was 0.56 to 0.61. The influence of genetic factors on tinnitus was largely independent of genetic factors for hearing thresholds. Conclusions: Our hypotheses were confirmed: The fastest hearing deterioration occurred for new tinnitus cases. A moderate genetic influence for tinnitus was confirmed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzGaM
via IFTTT
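
The variance proportions quoted above come from formal genetic modeling. As a rough illustration only, Falconer's classical twin-study approximation yields numbers of the same magnitude when fed hypothetical twin-pair correlations; these correlations are invented for the example and are not the study's estimates:

```python
# Falconer's approximation: heritability from the gap between monozygotic
# (MZ) and dizygotic (DZ) twin-pair correlations. Illustrative values only.

r_mz = 0.40  # hypothetical MZ twin-pair correlation for tinnitus
r_dz = 0.20  # hypothetical DZ twin-pair correlation

heritability_a = 2.0 * (r_mz - r_dz)   # additive genetic proportion (A)
shared_env_c = 2.0 * r_dz - r_mz       # shared-environment proportion (C)
unique_env_e = 1.0 - r_mz              # individual-specific environment (E)
```

With these inputs the additive genetic share is 0.40 and the individual-specific environmental share 0.60, close to the 0.40 and 0.56-0.61 reported in the abstract; the actual paper fitted the model formally rather than using this shortcut.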

Response to Letter to the Editor: RE: Henry, J.A., Frederick, M., Sell, S, Griest, S., Abrams, H. (2015) Validation of a Novel Combination Hearing Aid and Tinnitus Therapy Device, Ear Hear, 36(1): 42–52

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzFDK
via IFTTT

Sequential Bilateral Cochlear Implantation in Children: Outcome of the Second Implant and Long-Term Use

Objectives: The aim of this retrospective cohort study was to assess speech perception outcomes of second-side cochlear implants (CI2) relative to first-side implants (CI1) in 160 participants who received their CI1 as a child. The predictive factors of CI2 speech perception outcomes were investigated. In addition, CI2 device use predictive models were assessed using the categorical variable of participant’s decision to use CI2 for a minimum of 5 years after surgery. Findings from a prospective study that evaluated the bilateral benefit for speech recognition in noise in a participant subgroup (n = 29) are also presented. Design: Participants received CI2 between 2003 and 2009 (and CI1 between 1988 and 2008), and were observed from surgery to a minimum of 5 years after sequential surgery. Group A (n = 110) comprised prelingually deaf children (severe to profound) with no or little acquired oral language before implantation, while group B (n = 50) comprised prelingually deaf children with acquired language before implantation, in addition to perilingually and postlingually deaf children. Speech perception outcomes included the monosyllable test score or the closed-set Early Speech Perception test score if the monosyllable test was too difficult. To evaluate bilateral benefit for speech recognition in noise, participants were tested with the Hearing in Noise test in bilateral and “best CI” test conditions with noise from the front and noise from either side. Bilateral advantage was calculated by subtracting the Hearing in Noise test speech reception thresholds in noise obtained in the bilateral listening mode from those obtained in the unilateral “best CI” mode. Results: On average, CI1 speech perception was 28% better than CI2 performance in group A; the same difference was 20% in group B. A small bilateral speech perception benefit of using CI2 was measured, 3% in group A and 7% in group B.
Longer interimplant interval predicted poorer CI2 speech perception in group A, but only for those who did not use a hearing aid in the interimplant interval in group B. At least 5 years after surgery, 25% of group A and 10% of group B did not use CI2. In group A, prediction factors for nonuse of CI2 were longer interimplant intervals or CI2 age. Large difference in speech perception between the two sides was a predictor for CI2 nonuse in both groups. Bilateral advantage for speech recognition in noise was mainly obtained for the condition with noise near the “best CI”; the addition of a second CI offered a new head shadow benefit. A small mean disadvantage was measured when the noise was located opposite to the “best CI.” However, the latter was not significant. Conclusions: Generally, in both groups, if CI2 did not become comparable with CI1, participants were more likely to choose not to use CI2 after some time. In group A, increased interimplant intervals predicted poorer CI2 speech perception results and increased the risk of not using CI2 at a later date. Bilateral benefit was mainly obtained when noise was opposite to CI2, introducing a new head shadow benefit.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzEzG
via IFTTT
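
The bilateral advantage defined in the Design section above is a simple difference of speech reception thresholds (SRTs). A sketch with hypothetical Hearing in Noise test values, chosen to mirror the head-shadow pattern described in the Results:

```python
# SRTs in dB SNR (lower is better); all values are hypothetical, not the
# study's data. Condition names are illustrative labels.
srt_best_ci = {
    "noise_front": -1.0,
    "noise_near_best_ci": 1.5,    # noise on the side of the better implant
    "noise_opposite_best_ci": -4.0,
}
srt_bilateral = {
    "noise_front": -1.5,
    "noise_near_best_ci": -1.0,   # second CI shadows this noise source
    "noise_opposite_best_ci": -3.5,
}

# Bilateral advantage = unilateral "best CI" SRT minus bilateral SRT;
# a positive value means bilateral listening tolerated a worse SNR.
advantage_db = {cond: srt_best_ci[cond] - srt_bilateral[cond]
                for cond in srt_best_ci}
```

In this toy example the largest advantage appears when noise sits near the "best CI" (the new head-shadow benefit), and a small negative value appears when noise is opposite the "best CI", matching the pattern the abstract reports.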

Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

Objectives: The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design: Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results: Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition.
Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions: The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdzKaw
via IFTTT
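
The time-gating construction described in the Design section above can be sketched as follows; the gate size and the placeholder audio arrays are assumptions for illustration, not the study's actual stimuli or parameters:

```python
import numpy as np

def gated_stimuli(frame, word, gate_ms, fs):
    """Build a series of stimuli in which the sentence-final word is
    revealed in progressively longer gates, as in the time-gating
    paradigm. `frame` and `word` are audio sample arrays."""
    gate_len = int(fs * gate_ms / 1000.0)
    series = []
    n = gate_len
    while n < len(word) + gate_len:
        # Sentence frame followed by the first n samples of the word;
        # the final item contains the complete word.
        series.append(np.concatenate([frame, word[:min(n, len(word))]]))
        n += gate_len
    return series

fs = 16000
frame = np.zeros(fs)              # placeholder 1-s sentence-frame audio
word = np.ones(int(0.3 * fs))     # placeholder 300-ms final word
gates = gated_stimuli(frame, word, gate_ms=100, fs=fs)
```

Each successive stimulus adds one gate's worth of acoustic-phonetic information, which is how the study quantified "more gates needed" for the CHH group in the low-predictability condition.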

Dichotic Digits Test Performance Across the Ages: Results From Two Large Epidemiologic Cohort Studies

Objectives: The Dichotic Digits test (DDT) has been widely used to assess central auditory processing but there is limited information on observed DDT performance in a general population. The purpose of the study was to determine factors related to DDT performance in a large cohort spanning the adult age range. Design: The study was cross-sectional and subjects were participants in the Epidemiology of Hearing Loss Study (EHLS), a population-based investigation of age-related hearing loss, or the Beaver Dam Offspring Study (BOSS), a study of aging in the adult offspring of the EHLS members. Subjects seen during the 4th EHLS (2008 to 2010) or the 2nd BOSS (2010 to 2013) examination were included (N = 3655 participants [1391 EHLS, 2264 BOSS]; mean age = 61.1 years, range = 21 to 100 years). The free and right ear-directed recall DDTs were administered using 25 sets of triple-digit pairs with a 70 dB HL presentation level. Pure-tone audiometric testing was conducted and the pure-tone threshold average (PTA) at 0.5, 1, 2, and 4 kHz was categorized using the worse ear: no loss = PTA ≤ 25 dB HL; mild loss = 25 < PTA ≤ 40 dB HL. Cognitive impairment was defined as a Mini-Mental State Examination score

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdkF9a
via IFTTT

Directional Microphone Contralateral Routing of Signals in Cochlear Implant Users: A Within-Subjects Comparison

Objectives: For medical or financial reasons, bilateral cochlear implantation is not always possible in bilaterally deafened patients. In such cases, a contralateral routing of signals (CROS) device could complement the monaural implant. The goal of our study was to compare the benefit of three different conditions: (1) unilateral cochlear implant (CI) alone, (2) unilateral CI complemented with a directional CROS microphone, and (3) bilateral CIs. Design: Twelve experienced bilateral CI users were tested. Speech reception in noise and sound localization were measured in the three above-mentioned conditions. Patients judged which condition they presumed to be activated and rated the subjective benefit on a hearing scale. Results: Compared with the unilateral CI condition, the additional CROS device provided significantly better speech intelligibility in noise when speech signals came from the front or from the side of the CROS microphone. Only small subjective improvement was observed. Bilaterally activated CIs further improved hearing performance; this was the only condition in which sound localization was possible. Subjective evaluation showed a clear preference for the bilateral CI treatment. Conclusions: In bilaterally deafened patients, bilateral implantation is the preferred form of treatment. However, patients with only one implant could benefit from an additional directional-microphone CROS device.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdJe5J
via IFTTT

Children With Cochlear Implants and Their Parents: Relations Between Parenting Style and Children’s Social-Emotional Functioning

Objectives: Parenting a child who has a severe or profound hearing loss can be challenging and at times stressful, and might cause parents to use more adverse parenting styles compared with parents of hearing children. Parenting styles are known to impact children’s social-emotional development. Children with a severe to profound hearing loss may be more reliant on their parents in terms of their social-emotional development when compared with their hearing peers, who typically have greater opportunities to interact with and learn from others outside their family environment. Identifying the impact of parenting styles on the social-emotional development of children who have cochlear implants (CIs) could help advance these children’s well-being. Therefore, the authors compared parenting styles of parents with hearing children and of parents with children who have a CI, and examined the relations between parenting styles and two key aspects of children’s social-emotional functioning: emotion regulation and empathy. Design: Ninety-two hearing parents and their children (aged 1 to 5 years old), who were either hearing (n = 46) or had a CI (n = 46), participated in this cross-sectional study. Parents completed questionnaires concerning their parenting styles (i.e., positive, negative, and uninvolved), and regarding the extent to which their children expressed negative emotions (i.e., anger and sadness) and empathy. Furthermore, an emotion-regulation task measuring negative emotionality was administered to the children. Results: No differences in reported parenting styles were observed between parents of hearing children and parents of children with a CI. In addition, negative and uninvolved parenting styles were related to higher levels of negative emotionality in both groups of children. No relation was found between positive parenting and children’s social-emotional functioning. Hearing status did not moderate these relationships.
Language mediated the relationship between parenting styles and children’s social-emotional functioning. Conclusions: Children’s hearing status did not impact parenting styles. This may be a result of the support that parents of children with a CI receive during their enrollment in the rehabilitation program preceding and after implantation. Rehabilitation programs should dedicate more attention to informing parents about the impact of parenting behaviors on children’s social-emotional functioning. Offering parenting courses as part of the program could promote children’s well-being. Future longitudinal research should address the directionality of the relations between parenting styles and children’s social-emotional functioning.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qdkDhy
via IFTTT

Letter to the Editor: Reporting of Data to Inform the Design of a Definitive Trial Re: Henry, J.A., Frederick, M., Sell, S., Griest, S., Abrams, H. (2015). Validation of a Novel Combination Hearing Aid and Tinnitus Therapy Device, Ear Hear, 36(1): 42–52

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnBgEa
via IFTTT

Effects of Stimulus Polarity and Artifact Reduction Method on the Electrically Evoked Compound Action Potential

Objective: Previous research from our laboratory comparing electrically evoked compound action potential (ECAP) artifact reduction methods has shown larger amplitudes and lower thresholds with cathodic-leading forward masking (CathFM) than with alternating polarity (AltPol). One interpretation of this result is that the anodic-leading phase used with AltPol elicits a less excitatory response (in contrast to results from recent studies with humans), which when averaged with responses to cathodic-leading stimuli, results in smaller amplitudes. Another interpretation is that the latencies of the responses to anodic- and cathodic-leading pulses differ, which when averaged together, result in smaller amplitudes than for either polarity alone due to temporal smearing. The purpose of this study was to separate the effects of stimulus polarity and artifact reduction method to determine the relative effects of each. Design: This study used a within-subjects design. ECAP growth functions were obtained using CathFM, anodic-leading forward masking (AnodFM), and AltPol for 23 CI recipients (N = 13 Cochlear and N = 10 Advanced Bionics). N1 latency, amplitude, slope of the amplitude growth function, and threshold were compared across methods. Data were analyzed separately for each manufacturer due to inherent differences between devices. Results: N1 latencies were significantly shorter for AnodFM than for CathFM and AltPol for both Cochlear and Advanced Bionics participants. Amplitudes were larger for AnodFM than for either CathFM or AltPol for Cochlear recipients; amplitude was not significantly different across methods for Advanced Bionics recipients. Slopes were shallowest for CathFM for Cochlear subjects, but were not significantly different among methods for Advanced Bionics subjects. Thresholds with AltPol were significantly higher than both FM methods for Cochlear recipients; there was no difference in threshold across methods for the Advanced Bionics recipients.
Conclusions: For Cochlear devices, the smaller amplitudes and higher thresholds observed for AltPol seem to be the result of latency differences between polarities. These results suggest that AltPol is not ideal for managing stimulus artifact for ECAP recordings. For the Advanced Bionics group, there were no significant differences among methods for amplitude, slope, or threshold, which suggests that polarity and artifact reduction method have little influence in these devices. We postulate that polarity effects are minimized for symmetrical biphasic pulses that lack an interphase gap, such as those used with Advanced Bionics devices; however, this requires further investigation.
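The two artifact-reduction schemes compared in this abstract can be sketched in a few lines. This is an illustrative toy with synthetic waveforms, not the recording software used in the study; all array and function names are hypothetical:

```python
import numpy as np

def forward_masking_ecap(probe, masker_probe, masker, baseline):
    """Classic forward-masking subtraction: A - B + C - D.

    A = probe alone (neural response + stimulus artifact)
    B = masker + probe (probe artifact only; the neural response is masked)
    C = masker alone, D = no stimulus (amplifier baseline).
    A linear stimulus artifact cancels, leaving the neural ECAP.
    """
    return probe - masker_probe + masker - baseline

def alternating_polarity_ecap(cathodic_leading, anodic_leading):
    """Average of responses to opposite-polarity pulses.

    The artifact inverts with polarity and cancels, but any latency
    difference between the two neural responses smears the average
    (the temporal-smearing interpretation discussed above).
    """
    return 0.5 * (cathodic_leading + anodic_leading)
```

With two identical neural responses the AltPol average is undistorted; with a latency offset between polarities, the averaged peak is smaller than either single-polarity peak, matching the smearing account of the smaller AltPol amplitudes.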

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnY6M4
via IFTTT

Normative Wideband Reflectance, Equivalent Admittance at the Tympanic Membrane, and Acoustic Stapedius Reflex Threshold in Adults

Objectives: Wideband acoustic immittance (WAI) measures such as pressure reflectance, parameterized by absorbance and group delay, equivalent admittance at the tympanic membrane (TM), and acoustic stapedius reflex threshold (ASRT) describe middle ear function across a wide frequency range, compared with traditional tests employing a single frequency. The objective of this study was to obtain normative data using these tests for a group of normal-hearing adults and investigate test–retest reliability using a longitudinal design. Design: A longitudinal prospective design was used to obtain normative test and retest data on clinical and WAI measures. Subjects were 13 males and 20 females (mean age = 26 years). Inclusion criteria included normal audiometry and clinical immittance. Subjects were tested on two separate visits approximately 1 month apart. Reflectance and equivalent admittance at the TM were measured from 0.25 to 8.0 kHz under three conditions: at ambient pressure in the ear canal and with pressure sweeps from positive to negative pressure (downswept) and negative to positive pressure (upswept). Equivalent admittance at the TM was calculated using admittance measurements at the probe tip that were adjusted using a model of sound transmission in the ear canal and acoustic estimates of ear-canal area and length. Wideband ASRTs were measured at tympanometric peak pressure (TPP) derived from the average TPP of downswept and upswept tympanograms. Descriptive statistics were obtained for all WAI responses, and wideband and clinical ASRTs were compared. Results: Mean absorbance at ambient pressure and TPP demonstrated a broad band-pass pattern typical of previous studies. Test–retest differences were lower for absorbance at TPP for the downswept method compared with ambient pressure at frequencies between 1.0 and 1.26 kHz.
Mean tympanometric peak-to-tail differences for absorbance were greatest around 1.0 to 2.0 kHz and similar for positive and negative tails. Mean group delay at ambient pressure and at TPP were greatest between 0.32 and 0.6 kHz at 200 to 300 μsec, reduced at frequencies between 0.8 and 1.5 kHz, and increased above 1.5 kHz to around 150 μsec. Mean equivalent admittance at the TM had a lower level for the ambient method than at TPP for both sweep directions below 1.2 kHz, but the difference between methods was only statistically significant for the comparison between the ambient method and TPP for the upswept tympanogram. Mean equivalent admittance phase was positive at all frequencies. Test–retest reliability of the equivalent admittance level ranged from 1 to 3 dB at frequencies below 1.0 kHz, but increased to 8 to 9 dB at higher frequencies. The mean wideband ASRT for an ipsilateral broadband noise activator was 12 dB lower than the clinical ASRT, but had poorer reliability. Conclusions: Normative data for the WAI test battery revealed minor differences for results at ambient pressure compared with tympanometric methods at TPP for reflectance, group delay, and equivalent admittance level at the TM for subjects with middle ear pressure within ±100 daPa. Test–retest reliability was better for absorbance at TPP for the downswept tympanogram compared with ambient pressure at frequencies around 1.0 kHz. Large peak-to-tail differences in absorbance combined with good reliability at frequencies between about 0.7 and 3.0 kHz suggest that this may be a sensitive frequency range for interpreting absorbance at TPP. The mean wideband ipsilateral ASRT was lower than the clinical ASRT, consistent with previous studies. Results are promising for the use of a wideband test battery to evaluate middle ear function.
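For reference, the two reflectance-derived quantities reported above have simple definitions: absorbance is 1 − |R|², and group delay is the negative slope of the reflectance phase with respect to angular frequency. A minimal sketch, assuming a measured complex pressure reflectance `R` on a frequency grid (names are illustrative, not from the study's analysis code):

```python
import numpy as np

def absorbance(R):
    # fraction of incident sound power absorbed by the middle ear: 1 - |R|^2
    return 1.0 - np.abs(R) ** 2

def group_delay(R, freqs_hz):
    # negative derivative of the unwrapped reflectance phase
    # with respect to angular frequency, in seconds
    phase = np.unwrap(np.angle(R))
    return -np.gradient(phase, 2.0 * np.pi * freqs_hz)
```

For example, |R| = 0.5 gives an absorbance of 0.75, and a pure acoustic delay of 200 μs appears as a constant group delay of 200 μs, the order of magnitude reported above at low frequencies.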

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnBgnE
via IFTTT

Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of Eisenberg et al. (2002)

Objectives: Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Design: Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Results: Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects.
However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions: First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.
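Noise vocoding of the kind used in this study (four spectral channels) can be sketched with a simple FFT-based band split: divide the signal into frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. This is a crude illustrative toy, not the study's actual processing; channel edges, envelope smoothing, and filter shapes here are assumptions:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, fmin=100.0, fmax=8000.0):
    """Crude FFT-based noise vocoder: log-spaced bands, rectified and
    smoothed envelopes, envelope-modulated band-limited noise carriers."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    spec = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    smooth = int(fs * 0.01)  # ~10 ms moving-average envelope smoother
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spec, 0), n)
        env = np.convolve(np.abs(band), np.ones(smooth) / smooth, mode="same")
        carrier = np.fft.irfft(np.where(mask, noise_spec, 0), n)
        out += env * carrier
    return out
```

Reducing `n_channels` coarsens the spectral detail, which is exactly the degradation manipulated in this line of research.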

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnt0Eo
via IFTTT

Three-Dimensional Force Profile During Cochlear Implantation Depends on Individual Geometry and Insertion Trauma

Objectives: To preserve the acoustic hearing, cochlear implantation has to be as atraumatic as possible. Therefore, understanding the impact of the cochlear geometry on insertion forces and intracochlear trauma might help to adapt and improve the electrode insertion and reduce the probability of intracochlear trauma. Design: The study was conducted on 10 fresh-frozen human temporal bones. The inner ear was removed from the temporal bone. The bony capsule covering the scala vestibuli was removed and the dissected inner ear was mounted on the three-dimensional (3D) force measurement system (Agilent Technologies, Nano UTM, Santa Clara, CA). A lateral wall electrode array was inserted, and the forces were recorded in three dimensions with a sensitivity of 2 μN. Afterwards, the bones were scanned using a Skyscan 1173 micro-computed tomography (micro-CT). The obtained 3D force profiles were correlated with the videos of the insertions recorded through the microscope, and the micro-CT images. Results: A correlation was found between intracochlear force profiles measured in three different directions and intracochlear trauma detected with micro-CT imaging. The angle of insertion and the cochlear geometry had a significant impact on the electrode array insertion forces and possible insertion trauma. Intracochlear trauma occurred frequently within the first 180° from the round window, where buckling of the proximal part of the electrode carrier inside the cochlea, and rupturing of the spiral ligament, was observed. Conclusions: The combination of the 3D force measurement system and micro-CT can be used to characterize the mechanical behavior of a CI electrode array and some forms of insertion trauma. Intracochlear trauma does not always correlate with higher force amplitudes, but rather with an abrupt change of force directions.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnH4xv
via IFTTT

The Change in Electrical Stimulation Levels During 24 Months Postimplantation for a Large Cohort of Adults Using the Nucleus® Cochlear Implant

Objectives: To examine electrical stimulation data over 24 months postimplantation in adult implant users. The first aim was to calculate mean T and C levels for seven time points, for four cochlear segments, and two array types. The second aim was to (a) analyze the degree of change in each of the T and C levels as a function of dynamic range (DR) for six consecutive time point comparisons, for the four segments, and (b) to determine the proportion of participants with an acceptable degree of change. The third aim was to examine relationships between demographic factors and degree of change. Design: T levels, C levels, and dynamic ranges were extracted for 680 adults using Nucleus implants for the following postimplant time points: 2-, 3-, 6-, 9-, 12-, 18-, and 24-month. For each time point, mean levels were calculated for the four segments. The degree of change in each of the levels was analyzed for six consecutive time point comparisons. The criterion for an acceptable degree of change was ≤20% of DR. Results: Mean T level was significantly lower for the 2-month time point compared with all time points after the 3-month time point. Mean C level was significantly lower for the 2- and 3-month time points compared with all other time points. Mean T level was significantly lower for the apical compared with all other segments and for the lower-basal compared with the upper-basal segment. Mean C level was significantly different across all four segments. Mean C level for the basal segments was 4 CLs higher for the perimodiolar array compared with the straight array. No significant differences were evident for the mean degree of change between consecutive time point comparisons. For all segments, approximately 65 to 75% of the participants showed an average acceptable degree of change in levels from the 3- to 6-month comparison. The mean degree of change in T levels was significantly greater for the basal segments compared with all other segments.
The mean degree of change in levels was significantly greater for the otosclerosis group compared with all other groups, and for the prelingual onset of deafness group compared with the postlingual group. Conclusion: Given the very large cohort, this study provides evidence for the mean levels and the degree of change in these levels that should be expected for four segments in the first 24 months postimplantation for adults using Nucleus implants. The mean T and C levels were consistent after the 3- and 6-month time points postimplant, respectively. The degree of change was variable between individuals. For each segment, however, a large percentage of participants showed an average change of ≤20% in each of the T and C levels from the 3- to 6-month comparison. Given the large degree of change in levels for some groups, the results provide strong evidence in favor of frequent monitoring of levels in the first 24 months postimplantation for patients with otosclerosis, prelingual onset of deafness, and those who exhibit >20% change in levels after 3 months postimplantation.
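The ≤20%-of-dynamic-range criterion used above is a simple computation; a sketch with hypothetical clinical-level values in current-level (CL) units (function names are illustrative):

```python
def change_as_percent_dr(level_a, level_b, dynamic_range):
    """Degree of change between two consecutive time points, expressed
    as a percentage of the dynamic range (C level minus T level)."""
    return abs(level_b - level_a) / dynamic_range * 100.0

def acceptable_change(level_a, level_b, dynamic_range, criterion_pct=20.0):
    """True when the change meets the study's <=20%-of-DR criterion."""
    return change_as_percent_dr(level_a, level_b, dynamic_range) <= criterion_pct
```

For example, a T level moving from 100 to 110 CLs in a map with a 50-CL dynamic range is a 20% change, just within the criterion; a move to 112 CLs (24%) would exceed it.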

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnJoEM
via IFTTT

Cold Thermal Irrigation Decreases the Ipsilateral Gain of the Vestibulo-Ocular Reflex

Objectives: During head rotations, neuronal firing rates increase in ipsilateral and decrease in contralateral vestibular afferents. At low accelerations, this “push-pull mechanism” is linear. At high accelerations, however, the change of firing rates is nonlinear in that the ipsilateral increase of firing rate is larger than the contralateral decrease. This mechanism of stronger ipsilateral excitation than contralateral inhibition during high-acceleration head rotation, known as Ewald’s second law, is implemented within the nonlinear pathways. The authors asked whether caloric stimulation could provide an acceleration signal high enough to influence the contribution of the nonlinear pathway to the rotational vestibulo-ocular reflex gain (rVOR gain) during head impulses. Design: Caloric warm (44°C) and cold (24, 27, and 30°C) water irrigations of the left ear were performed in 7 healthy human subjects with the lateral semicircular canals oriented approximately earth-vertical (head inclined 30° from supine) and earth-horizontal (head inclined 30° from upright). Results: With the lateral semicircular canal oriented earth-vertical, the strongest cold caloric stimulus (24°C) significantly decreased the rVOR gain during ipsilateral head impulses, while all other irrigations, irrespective of head position, had no significant effect on rVOR gains during head impulses to either side. Conclusions: Strong caloric irrigation, which can only be achieved with cold water, reduces the rVOR gain during ipsilateral head impulses and thus demonstrates Ewald’s second law in healthy subjects. This unilateral gain reduction suggests that cold-water caloric irrigation shifts the set point of the nonlinear relation between head acceleration and the vestibular firing rate toward a less acceleration-sensitive zone.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pnOIaY
via IFTTT

Electrocochleography in Cochlear Implant Recipients With Residual Hearing: Comparison With Audiometric Thresholds

Objectives: To determine whether electrocochleography (ECoG) thresholds, especially cochlear microphonic and auditory nerve neurophonic thresholds, measured using an intracochlear electrode, can be used to predict pure-tone audiometric thresholds following cochlear implantation in ears with residual hearing. Design: Pure-tone audiometric thresholds and ECoG waveforms were measured at test frequencies from 125 to 4000 Hz in 21 Advanced Bionics cochlear implant recipients with residual hearing in the implanted ear. The “difference” and “summation” responses were computed from the ECoG waveforms measured from two alternating phases of stimulation. The interpretation is that difference responses are largely from the cochlear microphonic, while summation responses are largely from the auditory nerve neurophonic. The pure-tone audiometric thresholds were also measured with the same equipment used for ECoG measurements. Results: Difference responses were observed in all 21 implanted ears, whereas summation response waveforms were observed in only 18 ears. The ECoG thresholds strongly correlated (r2 = 0.87, n = 150 for difference response; r2 = 0.82, n = 72 for summation response) with audiometric thresholds. The mean difference between the difference response and audiometric thresholds was −3.2 (±9.0) dB, while the mean difference between summation response and audiometric thresholds was −14 (±11) dB. In four out of 37 measurements, difference responses were measured at frequencies where no behavioral thresholds were present. Conclusions: ECoG thresholds may provide a useful metric for the assessment of residual hearing in cochlear implant subjects for whom it is not possible to perform behavioral audiometric testing.
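The difference and summation responses described above are simple combinations of the responses to the two stimulus phases: the phase-inverting part of the average follows the cochlear microphonic, while the phase-insensitive part follows the neurophonic. A minimal sketch with synthetic waveforms (variable names are illustrative, not from the study):

```python
import numpy as np

def difference_response(condensation, rarefaction):
    # half the difference of the two phase responses: components that
    # invert with stimulus phase, dominated by the cochlear microphonic (CM)
    return 0.5 * (condensation - rarefaction)

def summation_response(condensation, rarefaction):
    # half the sum: components that do not invert with stimulus phase,
    # dominated by the auditory nerve neurophonic (ANN)
    return 0.5 * (condensation + rarefaction)
```

With a phase-following component plus a rectified (phase-insensitive) component in each recording, these two combinations separate the mixture cleanly.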

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2qdPz0R
via IFTTT

The Effect of Interaural Mismatches on Contralateral Unmasking With Single-Sided Vocoders

Objectives: Cochlear-implant (CI) users with single-sided deafness (SSD)—that is, one normal-hearing (NH) ear and one CI ear—can obtain some unmasking benefits when a mixture of target and masking voices is presented to the NH ear and a copy of just the masking voices is presented to the CI ear. NH listeners show similar benefits in a simulation of SSD-CI listening, whereby a mixture of target and masking voices is presented to one ear and a vocoded copy of the masking voices is presented to the opposite ear. However, the magnitude of the benefit for SSD-CI listeners is highly variable across individuals and is on average less than for NH listeners presented with vocoded stimuli. One possible explanation for the limited benefit observed for some SSD-CI users is that temporal and spectral discrepancies between the acoustic and electric ears might interfere with contralateral unmasking. The present study presented vocoder simulations to NH participants to examine the effects of interaural temporal and spectral mismatches on contralateral unmasking. Design: Speech-reception performance was measured in a competing-talker paradigm for NH listeners presented with vocoder simulations of SSD-CI listening. In the monaural condition, listeners identified target speech masked by two same-gender interferers, presented to the left ear. In the bilateral condition, the same stimuli were presented to the left ear, but the right ear was presented with a noise-vocoded copy of the interfering voices. This paradigm tested whether listeners could integrate the interfering voices across the ears to better hear the monaural target. Three common distortions inherent in CI processing were introduced to the vocoder processing: spectral shifts, temporal delays, and reduced frequency selectivity.
Results: In experiment 1, contralateral unmasking (i.e., the benefit from adding the vocoded maskers to the second ear) was impaired by spectral mismatches of four equivalent rectangular bandwidths or greater. This is equivalent to roughly a 3.6-mm mismatch between the cochlear places stimulated in the electric and acoustic ears, which is on the low end of the average expected mismatch for SSD-CI listeners. In experiment 2, performance was negatively affected by a temporal mismatch of 24 ms or greater, but not for mismatches in the 0 to 12 ms range expected for SSD-CI listeners. Experiment 3 showed an interaction between spectral shift and spectral resolution, with less effect of interaural spectral mismatches when the number of vocoder channels was reduced. Experiment 4 applied interaural spectral and temporal mismatches in combination. Performance was best when both frequency and timing were aligned, but in cases where a mismatch was present in one dimension (either frequency or latency), the addition of mismatch in the second dimension did not further disrupt performance. Conclusions: These results emphasize the need for interaural alignment—in timing and especially in frequency—to maximize contralateral unmasking for NH listeners presented with vocoder simulations of SSD-CI listening. Improved processing strategies that reduce mismatch between the electric and acoustic ears of SSD-CI listeners might improve their ability to obtain binaural benefits in multitalker environments.
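The ERB-to-millimeter conversion behind the ~3.6-mm figure above can be approximated with standard formulas: Greenwood's human frequency-place map and the Glasberg-Moore equivalent rectangular bandwidth. This is a rough sketch, and the exact millimeter value depends on the cochlear map and the frequency region assumed, so the numbers it produces will not exactly match the study's:

```python
import numpy as np

def greenwood_place_mm(f_hz):
    # Greenwood (1990) human map, distance from the apex in mm:
    # f = 165.4 * (10**(0.06 * x) - 1)  =>  x = log10(f/165.4 + 1) / 0.06
    return np.log10(f_hz / 165.4 + 1.0) / 0.06

def erb_hz(f_hz):
    # Glasberg & Moore (1990) equivalent rectangular bandwidth at f_hz
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_shift_mm(f_hz, n_erbs):
    # cochlear distance spanned by an upward shift of n_erbs starting at f_hz
    return greenwood_place_mm(f_hz + n_erbs * erb_hz(f_hz)) - greenwood_place_mm(f_hz)
```

At 1 kHz these particular formulas put a 4-ERB shift at roughly 2.7 mm of cochlear distance, the same order as the ~3.6 mm quoted above.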

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2qdxUqj
via IFTTT

Long-Term Synergistic Interaction of Cisplatin- and Noise-Induced Hearing Losses

Objective: Past experiments in the literature have shown that cisplatin interacts synergistically with noise to create hearing loss. Much of the previous work on the synergistic interaction of noise and cisplatin tested exposures that occurred very close together in time. The present study assessed whether rats that have been exposed to cisplatin continue to show increased susceptibility to noise-induced hearing loss months after conclusion of the cisplatin exposure. Design: Thirty-two Fischer 344/NHsd rats were exposed to one of five conditions: (1) cisplatin exposure followed by immediate cochlear tissue harvest, (2) cisplatin exposure and a 20-week monitoring period before tissue harvest, (3) cisplatin exposure followed immediately by noise exposure, (4) cisplatin exposure followed by noise exposure 16 weeks later, and (5) noise exposure without cisplatin exposure. The cisplatin exposure was an 8-week interval in which cisplatin was given every 2 weeks. Cochlear injury was evaluated using auditory brainstem response thresholds, P1 wave amplitudes, and postmortem outer hair cell counts. Results: The 8-week cisplatin exposure induced little threshold shift or P1 amplitude loss, and a small lesion of missing outer hair cells in the basal half of the cochlea. The rats exposed to noise immediately after the cisplatin exposure interval showed a synergistic interaction of cisplatin and noise. The group exposed to noise 16 weeks after the cisplatin exposure interval also showed more severe threshold shift and outer hair cell loss than control subjects. The controls exposed to cisplatin and monitored for 20 weeks showed little threshold shift or outer hair cell loss, but did show P1 wave amplitude changes over the 20-week monitoring period.
Conclusions: The results from the groups exposed to cisplatin followed by noise, combined with the findings from the cisplatin- and noise-only groups, suggest that the cisplatin induced cochlear injuries that were not severe enough to result in threshold shift, but left the cochlea in a state of heightened susceptibility to future injury. The heightened susceptibility to noise injury was still present 16 weeks after the conclusion of the cisplatin exposure.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2qdzGri
via IFTTT

Effects of Hearing Impairment and Hearing Aid Amplification on Listening Effort: A Systematic Review

Objectives: To undertake a systematic review of available evidence on the effect of hearing impairment and hearing aid amplification on listening effort. Two research questions were addressed: Q1) does hearing impairment affect listening effort? and Q2) can hearing aid amplification affect listening effort during speech comprehension? Design: English language articles were identified through systematic searches in PubMed, EMBASE, Cinahl, the Cochrane Library, and PsycINFO from inception to August 2014. References of eligible studies were checked. The Population, Intervention, Control, Outcomes, and Study design strategy was used to create inclusion criteria for relevance. It was not feasible to apply a meta-analysis of the results from comparable studies. For the articles identified as relevant, a quality rating, based on the 2011 Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines, was carried out to judge the reliability and confidence of the estimated effects. Results: The primary search produced 7017 unique hits using the keywords: hearing aids OR hearing impairment AND listening effort OR perceptual effort OR ease of listening. Of these, 41 articles fulfilled the Population, Intervention, Control, Outcomes, and Study design selection criteria of: experimental work on hearing impairment OR hearing aid technologies AND listening effort OR fatigue during speech perception. The methods applied in those articles were categorized into subjective, behavioral, and physiological assessment of listening effort. For each study, the statistical analysis addressing research question Q1 and/or Q2 was extracted. In seven articles more than one measure of listening effort was provided. Evidence relating to Q1 was provided by 21 articles that reported 41 relevant findings. Evidence relating to Q2 was provided by 27 articles that reported 56 relevant findings.
The quality of evidence on both research questions (Q1 and Q2) was very low, according to the Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1) as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. Conclusion: In summary, we could only identify scientific evidence from physiological measurement methods suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study population, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2qdzTuB
via IFTTT