Friday, 1 July 2016

Relative Weighting of Semantic and Syntactic Cues in Native and Non-Native Listeners’ Recognition of English Sentences

Objective: Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in relative weight of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer’s j and k factors. The j represents the probability of recognizing sentences with or without context, whereas the k represents the degree to which context improves recognition performance. Design: Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence and recognition was calculated for individual words as well as the sentence as a whole. Results: Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups. When combined, semantics and syntax benefitted EMN listeners significantly more than all three non-native groups of listeners. Conclusions: Language background influenced the use and weighting of semantic and syntactic cues in a complex manner. A native language advantage existed in the effective use of both cues combined. A language-dominance effect was seen in the use of semantics. No first-language effect was present for the use of either or both cues. For all non-native listeners, syntax contributed significantly more to sentence recognition than semantics, possibly due to the fact that semantics develops more gradually than syntax in second-language acquisition. The present study provides evidence that Boothroyd and Nittrouer’s j and k factors can be successfully used to quantify the effectiveness of contextual cue use in clinically relevant, linguistically diverse populations.
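
For readers unfamiliar with the j and k factors, the sketch below shows how they are conventionally computed from recognition probabilities, following Boothroyd and Nittrouer's formulation (p_whole = p_part^j for the j factor; 1 − p_with_context = (1 − p_without_context)^k for the k factor). The numbers in the example are purely illustrative and are not taken from this study.

```python
import math

def j_factor(p_whole, p_part):
    """Boothroyd & Nittrouer j factor: p_whole = p_part ** j.
    j near the number of parts -> parts are recognized independently (no context);
    smaller j -> context binds the parts together."""
    return math.log(p_whole) / math.log(p_part)

def k_factor(p_with_context, p_without_context):
    """Boothroyd & Nittrouer k factor: (1 - p_with) = (1 - p_without) ** k.
    k = 1 -> context gives no benefit; larger k -> greater contextual benefit."""
    return math.log(1 - p_with_context) / math.log(1 - p_without_context)

# Illustrative numbers only (not taken from the study):
print(j_factor(p_whole=0.40, p_part=0.75))                      # ~3.2 for a four-word sentence
print(k_factor(p_with_context=0.85, p_without_context=0.60))    # ~2.1
```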

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29dxXrq
via IFTTT

Effects of Modified Hearing Aid Fittings on Loudness and Tone Quality for Different Acoustic Scenes

Objective: To compare loudness and tone-quality ratings for sounds processed via a simulated five-channel compression hearing aid fitted using NAL-NL2 or using a modification of the fitting designed to be appropriate for the type of listening situation: speech in quiet, speech in noise, music, and noise alone. Design: Ratings of loudness and tone quality were obtained for stimuli presented via a loudspeaker in front of the participant. For normal-hearing participants, levels of 50, 65, and 80 dB SPL were used. For hearing-impaired participants, the stimuli were processed via a simulated hearing aid with five-channel fast-acting compression fitted using NAL-NL2 or using a modified fitting. Input levels to the simulated hearing aid were 50, 65, and 80 dB SPL. All participants listened with one ear plugged. For speech in quiet, the modified fitting was based on the CAM2B method. For speech in noise, the modified fitting used slightly (0 to 2 dB) decreased gains at low frequencies. For music, the modified fitting used increased gains (by 5 to 14 dB) at low frequencies. For noise alone, the modified fitting used decreased gains at all frequencies (by a mean of 1 dB at low frequencies increasing to 8 dB at high frequencies). Results: For speech in quiet, ratings of loudness with the NAL-NL2 fitting were slightly lower than the mean ratings for normal-hearing participants for all levels, while ratings with CAM2B were close to normal for the two lower levels, and slightly greater than normal for the highest level. Ratings of tone quality were close to the optimum value (“just right”) for both fittings, except that the CAM2B fitting was rated as very slightly boomy for the 80-dB SPL level. For speech in noise, the ratings of loudness were very close to the normal values and the ratings of tone quality were close to the optimal value for both fittings and for all levels. For music, the ratings of loudness were close to the normal values for NAL-NL2 and slightly above normal for the modified fitting. The tone quality was rated as very slightly tinny for NAL-NL2 and very slightly boomy for the modified fitting. For noise alone, the NAL-NL2 fitting was rated as slightly louder than normal for all levels, while the modified fitting was rated as close to normal. Tone quality was rated as slightly sharper for the NAL-NL2 fitting than for the modified fitting. Conclusions: Loudness and tone quality can sometimes be made slightly closer to “normal” by modifying gains for different listening situations. The modification for music required to achieve “normal” tone quality appears to be less than used in this study.
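
As a rough illustration of the scene-dependent modifications described above, the sketch below applies per-channel gain offsets to a baseline prescription. The channel center frequencies and baseline gains are hypothetical; the offsets merely follow the ranges quoted in the abstract, and speech in quiet is handled by a separate CAM2B prescription rather than an offset.

```python
# Illustrative five-channel gain table (dB). Baseline values are hypothetical;
# offsets follow the ranges described in the abstract, not the study's exact values.
BASELINE_NAL_NL2 = {250: 10, 500: 14, 1000: 18, 2000: 22, 4000: 25}

SCENE_OFFSETS = {
    # speech in quiet: refit with CAM2B rather than offsetting NAL-NL2 (not modeled here)
    "speech_in_noise": {250: -2, 500: -1, 1000: 0, 2000: 0, 4000: 0},    # 0-2 dB less at low freqs
    "music":           {250: +10, 500: +7, 1000: 0, 2000: 0, 4000: 0},   # 5-14 dB more at low freqs
    "noise_alone":     {250: -1, 500: -2, 1000: -4, 2000: -6, 4000: -8}, # ~-1 dB low rising to -8 dB high
}

def modified_gains(scene):
    """Return the per-channel gains after applying the scene-specific offsets."""
    offsets = SCENE_OFFSETS.get(scene, {})
    return {f: g + offsets.get(f, 0) for f, g in BASELINE_NAL_NL2.items()}

print(modified_gains("music"))
```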

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29dxTbk
via IFTTT

Age-Related Changes in Binaural Interaction at Brainstem Level

Objectives: Age-related hearing loss hampers the ability to understand speech in adverse listening conditions. This is attributed to a complex interaction of changes in the peripheral and central auditory system. One aspect that may deteriorate across the lifespan is binaural interaction. The present study investigates binaural interaction at the level of the auditory brainstem. It is hypothesized that brainstem binaural interaction deteriorates with advancing age. Design: Forty-two subjects of various ages participated in the study. Auditory brainstem responses (ABRs) were recorded using clicks and 500 Hz tone-bursts. ABRs were elicited by monaural right, monaural left, and binaural stimulation. Binaural interaction was investigated in two ways. First, grand averages of the binaural interaction component were computed for each age group. Second, wave V characteristics of the binaural ABR were compared with those of the summed left and right ABRs. Results: Binaural interaction in the click ABR was demonstrated by shorter latencies and smaller amplitudes in the binaural compared with the summed monaural responses. For 500 Hz tone-burst ABR, no latency differences were found. However, amplitudes were significantly smaller in the binaural than summed monaural condition. An age effect was found for 500 Hz tone-burst, but not for click ABR. Conclusions: Brainstem binaural interaction seems to decline with age. Interestingly, these changes seem to be stimulus-dependent.
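
The binaural interaction component referred to above is conventionally derived by subtracting the sum of the two monaural ABRs from the binaurally evoked ABR. A minimal sketch with hypothetical waveforms (array names and values are illustrative only):

```python
import numpy as np

def binaural_interaction_component(abr_binaural, abr_left, abr_right):
    """Binaural interaction component (BIC): the difference between the
    binaurally evoked ABR and the sum of the two monaural ABRs.
    A non-zero BIC indicates binaural interaction at brainstem level."""
    summed_monaural = abr_left + abr_right
    return abr_binaural - summed_monaural, summed_monaural

# Hypothetical averaged waveforms (microvolts) over a 10 ms epoch:
t = np.linspace(0, 0.010, 500)
abr_l = 0.3 * np.sin(2 * np.pi * 500 * t)
abr_r = 0.3 * np.sin(2 * np.pi * 500 * t + 0.1)
abr_b = 0.9 * (abr_l + abr_r)   # binaural response slightly smaller than the monaural sum
bic, summed = binaural_interaction_component(abr_b, abr_l, abr_r)
```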

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29dxBRU
via IFTTT

Maturation of Mechanical Impedance of the Skin-Covered Skull: Implications for Soft Band Bone-Anchored Hearing Systems Fitted in Infants and Young Children

Objectives: Little is known about the maturational changes in the mechanical properties of the skull and how they might contribute to infant–adult differences in bone conduction hearing sensitivity. The objective of this study was to investigate the mechanical impedance of the skin-covered skull for different skull positions and contact forces for groups of infants, young children, and adults. These findings provide a better understanding of how changes in mechanical impedance might contribute to developmental changes in bone conduction hearing, and might provide insight into how fitting and output verification protocols for bone-anchored hearing systems (BAHS) could be adapted for infants and young children. Design: Seventy-seven individuals participated in the study, including 63 infants and children (ages 1 month to 7 years) and 11 adults. Mechanical impedance magnitude for the forehead and temporal bone was collected for contact forces of 2, 4, and 5.4 N using an impedance head, a BAHS transducer, and a specially designed holding device. Mechanical impedance magnitude was determined across frequency using a stepped sine sweep from 100 to 10,000 Hz, and divided into low- and high-frequency sets for analysis. Results: Mechanical impedance magnitude was lowest for the youngest infants and increased throughout maturation in the low frequencies. For high frequencies, the youngest infants had the highest impedance, but only for a temporal bone placement. Impedance increased with increasing contact force for low frequencies for each age group and for both skull positions. The effect of placement was significant for high frequencies for each contact force and for each age group, except for the youngest infants. Conclusions: Our findings show that mechanical impedance properties change systematically up to 7 years of age. The significant age-related differences in mechanical impedance suggest that infant–adult differences in bone conduction thresholds may be related, at least in part, to properties of the immature skull and overlying skin and tissues. These results have important implications for fitting the soft band BAHS on infants and young children. For example, verification of output force from a BAHS on a coupler designed with adult values may not be appropriate for infants. This may also hold true for transducer calibration when assessing bone conduction hearing thresholds in infants for different skull locations. The results have two additional clinical implications for fitting soft band BAHSs. First, parents should be counseled to maintain sufficient and consistent tightness so that the output from the BAHS does not change as the child moves around during everyday activities. Second, placement of a BAHS on the forehead versus the temporal bone results in changes in mechanical impedance, which may contribute to a decrease in signal level at the cochlea, as it has been previously demonstrated that bone conduction thresholds are poorer at the forehead compared with a temporal placement.
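
Mechanical impedance at each stepped-sine frequency is the ratio of the complex force to the complex velocity measured at the impedance head, |Z(f)| = |F(f)|/|v(f)|. The sketch below is a generic single-bin DFT estimate of that magnitude and is not the authors' analysis code; the signal names and sampling details are assumptions.

```python
import numpy as np

def impedance_magnitude(force, velocity, fs, freq):
    """Mechanical impedance magnitude at one stepped-sine frequency:
    |Z(f)| = |F(f)| / |v(f)|, estimated from a single-bin DFT of the
    measured force and velocity signals (impedance-head outputs)."""
    n = len(force)
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq * t)
    F = np.dot(force, basis) * 2 / n     # complex amplitude of force at freq
    V = np.dot(velocity, basis) * 2 / n  # complex amplitude of velocity at freq
    return np.abs(F) / np.abs(V)

# A full sweep would loop this over frequencies from 100 to 10,000 Hz.
```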

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29dy724
via IFTTT

Auditory Impairments in HIV-Infected Children

Objectives: In a cross-sectional study of human immunodeficiency virus (HIV)-infected adults, the authors showed lower distortion product otoacoustic emissions (DPOAEs) in HIV+ individuals compared with controls as well as findings consistent with a central auditory processing deficit in HIV+ adults on antiretroviral therapy. The authors hypothesized that HIV+ children would also have a higher prevalence of abnormal central and peripheral hearing test results compared with HIV− controls. Design: Pure-tone thresholds, DPOAEs, and tympanometry were performed on 244 subjects (131 HIV+ and 113 HIV− subjects). Thirty-five of the HIV+ and 3 of the HIV− subjects had a history of tuberculosis treatment. Gap detection results were available for 18 HIV− and 44 HIV+ children. Auditory brainstem response results were available for 72 HIV− and 72 HIV+ children. Data from ears with abnormal tympanograms were excluded. Results: HIV+ subjects were significantly more likely to have abnormal tympanograms, histories of ear drainage, tuberculosis, or dizziness. All audiometric results were compared between groups using a two-way ANOVA with HIV status and ear drainage history as grouping variables. Mean audiometric thresholds, gap detection thresholds, and auditory brainstem response latencies did not differ between groups, although the HIV+ group had a higher proportion of individuals with a hearing loss >25 dB HL in the better ear. The HIV+ group had reduced DPOAE levels (p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29dxF3X
via IFTTT

Confirmation of PDZD7 as a Nonsyndromic Hearing Loss Gene

Objective: PDZD7 was identified in 2009 in a family with apparent nonsyndromic sensorineural hearing loss. However, subsequent clinical reports have associated PDZD7 with digenic Usher syndrome, the most common cause of deaf-blindness, or as a modifier of retinal disease. No further reports have validated this gene for nonsyndromic hearing loss, intuitively calling correct genotype–phenotype association into question. This report describes a validating second case for biallelic mutations in PDZD7 causing nonsyndromic mild to severe sensorineural hearing loss. It also provides detailed audiometric and ophthalmologic data excluding Usher syndrome in both the present proband (proband 1) and the first proband described in 2009 (proband 2). Design: Proband 1 was sequenced using a custom-designed next generation sequencing panel consisting of 151 deafness genes. Bioinformatics analysis and filtering disclosed two PDZD7 sequence variants (c.1648C>T, p.Q550* and c.2107del, p.S703Vfs*20). Segregation testing followed in the family. For both probands, audiograms were collected and analyzed for progressive hearing loss and detailed ophthalmic evaluations were performed including electroretinography. Results: Proband 1 demonstrated a prelingual, nonsyndromic, sensorineural hearing loss that progressed in the higher frequencies between 4 and 9 years old. PDZD7 segregation analysis confirmed biallelic inheritance (compound heterozygosity). Mutation analysis determined the c.1648C>T mutation as novel and reported the c.2107del deletion as rs397516633 with a calculated minor allele frequency of 0.000018. Clinical evaluation spanning well over a decade in proband 2 disclosed bilateral, nonprogressive hearing loss. Both probands showed healthy retinas, excluding Usher syndrome-like changes in the eye. Conclusions: PDZD7 is confirmed as a bona fide autosomal recessive nonsyndromic hearing loss gene. In both probands, there was no evidence of impaired vision or ophthalmic pathology. As the current understanding of PDZD7 mutations bridges Mendelian and complex phenotypes, the authors recommend careful variant interpretation, since PDZD7 is one of many genes associated with both Usher syndrome and autosomal recessive nonsyndromic hearing loss. Additional reports are required for understanding the complete phenotypic spectrum of this gene, including the possibility of high-frequency progression, as well as noise-induced hearing loss susceptibility in adult carriers. This report rules out all forms of Usher syndrome with an onset before ages 12 and 15 years in probands 1 and 2, respectively. However, due to the young ages of the probands, this report is uninformative regarding older patients.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29iydZy
via IFTTT

Impact of Hearing Aid Technology on Outcomes in Daily Life I: The Patients’ Perspective

Objectives: One of the challenges facing hearing care providers when recommending hearing aids is the choice of device technology level. Major manufacturers market families of hearing aids that are described as spanning the range from basic technology to premium technology. Premium technology hearing aids include acoustical processing capabilities (features) that are not found in basic technology instruments. These premium features are intended to yield improved hearing in daily life compared with basic-feature devices. However, independent research that establishes the incremental effectiveness of premium-feature devices compared with basic-feature devices is lacking. This research was designed to explore reported differences in hearing abilities for adults using premium- and basic-feature hearing aids in their daily lives. Design: This was a single-blinded, repeated crossover trial in which the participants were the blinded party. All procedures were carefully controlled to limit researcher bias. Forty-five participants used carefully fitted bilateral hearing aids for 1 month and then provided data to describe the hearing improvements or deficiencies noted in daily life. Typical participants were 70 years old with mild to moderate adult-onset hearing loss bilaterally. Each participant used four pairs of hearing aids: premium- and basic-feature devices from brands marketed by each of two major manufacturers. Participants were blinded about the devices they used and about the research questions. Results: All of the outcomes were designed to capture the participant’s point of view about the benefits of the hearing aids. Three types of data were collected: change in hearing-related quality of life, extent of agreement with six positively worded statements about everyday hearing with the hearing aids, and reported preferences between the premium- and basic-feature devices from each brand as well as across all four research hearing aids combined. None of these measures yielded a statistically significant difference in outcomes between premium- and basic-feature devices. Participants did not report better outcomes with premium processing with any measure. Conclusions: It could reasonably be asserted that the patient’s perspective is the gold standard for hearing aid effectiveness. While the acoustical processing provided by premium features can potentially improve scores on tests conducted in contrived conditions in a laboratory, or on specific items in a questionnaire, this does not ensure that the processing will be of noteworthy benefit when the hearing aid is used in the real-world challenges faced by the patient. If evidence suggests the patient cannot detect that premium features yield improvements over basic features in daily life, what is the responsibility of the provider in recommending hearing aid technology level? In the present research, there was no evidence to suggest that premium-feature devices yielded better outcomes than basic-feature devices from the patient’s point of view. All of the research hearing aids were substantially, but equally, helpful. Further research is needed on this topic with other hearing aids and other manufacturers. In the meantime, providers should insist on scientifically credible independent evidence to support effectiveness claims for any hearing help devices.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLzFf
via IFTTT

Development of Insertion Models Predicting Cochlear Implant Electrode Position

Objectives: To assess the possibility of defining a preferable range for electrode array insertion depth and surgical insertion distance for which frequency mismatch is minimized. To develop a surgical insertion guidance tool by which a preferred target angle can be attained using preoperative available anatomical data and surgically controllable insertion distance. Design: Multiplanar reconstructions of pre- and post-operative CT scans were evaluated in a population of 336 patients implanted with the CII HiFocus1 or HiFocus1J implant (26 bilateral implantees included). Cochlear radial distances were measured on four measurement axes on the preoperative CT scan. Electrode contact positions were obtained in angular depth, distance from the round window and to the modiolus center. Frequency mismatch was calculated based on the yielded frequency as a function of the angular position per contact. Cochlear diameters were clustered into three cochlear size groups with K-sample clustering. Using spiral fitting and general linear regression modeling, the feasibility of different insertion models with cochlear size measures and surgical insertion as input parameters was analyzed. The final developed model was internally validated with bootstrapping to calculate the optimism-corrected R2. Results: Frequency mismatch was minimized for surgical insertion of 6.7 mm and insertion depth of 484°. Cochlear size clusters were derived consisting of a “small” (N = 117), “medium” (N = 171), and “large” (N = 74) cluster with mean insertion depths of 506°, 480°, and 441°, respectively. The relation between surgical insertion (LE16) and insertion depth (θE1) differed significantly between the three clusters (p
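
The optimism-corrected R² mentioned in the design is typically obtained with a Harrell-style bootstrap: the model is refit on each bootstrap sample, the drop in R² from the bootstrap sample to the original data estimates the optimism, and the mean optimism is subtracted from the apparent R² of the full-data model. The sketch below shows the general procedure with a plain linear regression standing in for the insertion model; it is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def optimism_corrected_r2(X, y, n_boot=1000, seed=0):
    """Internal validation by bootstrapping (optimism correction).
    X: 2-D array of predictors (e.g., cochlear size measures, insertion distance);
    y: 1-D array of the modeled quantity (e.g., insertion depth)."""
    rng = np.random.default_rng(seed)
    full_model = LinearRegression().fit(X, y)
    apparent_r2 = r2_score(y, full_model.predict(X))
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))          # resample with replacement
        boot_model = LinearRegression().fit(X[idx], y[idx])
        r2_boot = r2_score(y[idx], boot_model.predict(X[idx]))  # apparent R^2 in the bootstrap sample
        r2_orig = r2_score(y, boot_model.predict(X))            # same model tested on the original data
        optimism.append(r2_boot - r2_orig)
    return apparent_r2 - float(np.mean(optimism))
```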

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLOQC
via IFTTT

How Can Public Health Approaches and Perspectives Advance Hearing Health Care?

This commentary explores the role of public health programs and themes in hearing health care. Ongoing engagement within the hearing professional community is needed to determine how to change the landscape and identify important features in the evolution of population hearing health care. Why and how to leverage existing public health programs and develop new programs to improve hearing health in older individuals is an important topic. Hearing professionals are encouraged to reflect on these themes and recommendations and join the discussion about the future of hearing science on a population level.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLh19
via IFTTT

fMRI as a Preimplant Objective Tool to Predict Postimplant Oral Language Outcomes in Children with Cochlear Implants

Objectives: Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Design: Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLwck
via IFTTT

A Randomized Control Trial: Supplementing Hearing Aid Use with Listening and Communication Enhancement (LACE) Auditory Training

Objective: To examine the effectiveness of the Listening and Communication Enhancement (LACE) program as a supplement to standard-of-care hearing aid intervention in a Veteran population. Design: A multisite randomized controlled trial was conducted to compare outcomes following standard-of-care hearing aid intervention supplemented with (1) LACE training using the 10-session DVD format, (2) LACE training using the 20-session computer-based format, (3) placebo auditory training (AT) consisting of actively listening to 10 hr of digitized books on a computer, and (4) educational counseling—the control group. The study involved 3 VA sites and enrolled 279 veterans. Both new and experienced hearing aid users participated to determine if outcomes differed as a function of hearing aid user status. Data for five behavioral and two self-report measures were collected during three research visits: baseline, immediately following the intervention period, and at 6 months postintervention. The five behavioral measures were selected to determine whether the perceptual and cognitive skills targeted in LACE training generalized to untrained tasks that required similar underlying skills. The two self-report measures were completed to determine whether the training resulted in a lessening of activity limitations and participation restrictions. Outcomes were obtained from 263 participants immediately following the intervention period and from 243 participants 6 months postintervention. Analyses of covariance comparing performance on each outcome measure separately were conducted using intervention and hearing aid user status as between-subject factors, visit as a within-subject factor, and baseline performance as a covariate. Results: No statistically significant main effects or interactions were found for the use of LACE on any outcome measure. Conclusions: Findings from this randomized controlled trial show that LACE training does not result in improved outcomes over standard-of-care hearing aid intervention alone. Potential benefits of AT may be different from those assessed by the performance and self-report measures utilized here. Individual differences not assessed in this study should be examined to evaluate whether AT with LACE has any benefits for particular individuals. Clinically, these findings suggest that audiologists may want to temper the expectations of their patients who embark on LACE training.
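
The described analysis of covariance could be sketched as follows. This is only an approximation of the general model structure, implemented here as a linear mixed model with a random intercept per participant to handle the repeated visits, and is not the authors' exact analysis; the column names are hypothetical.

```python
import statsmodels.formula.api as smf

def fit_outcome_model(df):
    """Approximate the described ANCOVA for one outcome measure:
    intervention and hearing aid user status as between-subject factors,
    visit as a within-subject factor, baseline performance as a covariate,
    with a per-participant random intercept for the repeated measures."""
    model = smf.mixedlm(
        "outcome ~ C(intervention) * C(user_status) * C(visit) + baseline",
        data=df,
        groups=df["subject"],
    )
    return model.fit()
```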

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLKAt
via IFTTT

Testing Speech Recognition in Spanish-English Bilingual Children with the Computer-Assisted Speech Perception Assessment (CASPA): Initial Report

This study evaluated the English version of Computer-Assisted Speech Perception Assessment (E-CASPA) with Spanish-English bilingual children. E-CASPA has been evaluated with monolingual English speakers ages 5 years and older, but it is unknown whether a separate norm is necessary for bilingual children. Eleven Spanish-English bilingual and 12 English monolingual children (6 to 12 years old) with normal hearing participated. Responses were scored by word, phoneme, consonant, and vowel. Regardless of scoring method, performance across three signal-to-noise ratio conditions was similar between groups, suggesting that the same norm can be used for both bilingual and monolingual children.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kMoxu
via IFTTT

Temporal Response Properties of the Auditory Nerve in Implanted Children with Auditory Neuropathy Spectrum Disorder and Implanted Children with Sensorineural Hearing Loss

Objective: This study aimed to (1) characterize temporal response properties of the auditory nerve in implanted children with auditory neuropathy spectrum disorder (ANSD), and (2) compare results recorded in implanted children with ANSD with those measured in implanted children with sensorineural hearing loss (SNHL). Design: Participants included 28 children with ANSD and 29 children with SNHL. All subjects used Cochlear Nucleus devices in their test ears. Both ears were tested in 6 children with ANSD and 3 children with SNHL. For all other subjects, only one ear was tested. The electrically evoked compound action potential (ECAP) was measured in response to each of the 33 pulses in a pulse train (excluding the second pulse) for one apical, one middle-array, and one basal electrode. The pulse train was presented in a monopolar-coupled stimulation mode at 4 pulse rates: 500, 900, 1800, and 2400 pulses per second. Response metrics included the averaged amplitude, latencies of response components, response width, alternating depth, and the amount of neural adaptation. These dependent variables were quantified based on the last six ECAPs or the six ECAPs occurring within a time window centered around 11 to 12 msec. A generalized linear mixed model was used to compare these dependent variables between the 2 subject groups. The slope of the linear fit of the normalized ECAP amplitudes (re. amplitude of the first ECAP response) over the duration of the pulse train was used to quantify the amount of ECAP increment over time for a subgroup of 9 subjects. Results: Pulse train-evoked ECAPs were measured in all but 8 subjects (5 with ANSD and 3 with SNHL). ECAPs measured in children with ANSD had smaller amplitude, longer averaged P2 latency and greater response width than children with SNHL. However, differences between these two groups were only observed for some electrodes. No differences in averaged N1 latency or in the alternating depth were observed between children with ANSD and children with SNHL. Neural adaptation measured in these 2 subject groups was comparable for relatively short durations of stimulation (i.e., 11 to 12 msec). Children with ANSD showed greater neural adaptation than children with SNHL for a longer duration of stimulation. Amplitudes of ECAP responses rapidly declined within the first few milliseconds of stimulation, followed by a gradual decline up to 64 msec after stimulus onset in the majority of subjects. This decline exhibited an alternating pattern at some pulse rates. Further increases in pulse rate diminished this alternating pattern. In contrast, ECAPs recorded from at least one stimulating electrode in six ears with ANSD and three ears with SNHL showed a clear increase in amplitude over the time course of stimulation. The slope of linear regression functions measured in these subjects was significantly greater than zero. Conclusions: Some but not all aspects of temporal response properties of the auditory nerve measured in this study differ between implanted children with ANSD and implanted children with SNHL. These differences are observed for some but not all electrodes. A new neural response pattern is identified. Further studies investigating its underlying mechanism and clinical relevance are warranted.
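
The increment metric described above, the slope of a line fitted to ECAP amplitudes normalized to the first response, can be computed as in the sketch below; array names are illustrative and the code is a generic formulation rather than the authors' analysis.

```python
import numpy as np

def ecap_increment_slope(amplitudes, pulse_times_ms):
    """Normalize ECAP amplitudes to the response to the first pulse and fit a
    line over the duration of the pulse train; a positive slope indicates the
    response grows over time, a negative slope indicates adaptation."""
    normalized = np.asarray(amplitudes, dtype=float) / amplitudes[0]
    slope, intercept = np.polyfit(pulse_times_ms, normalized, 1)
    return slope
```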

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLBgk
via IFTTT

A Comparison of Alternating Polarity and Forward Masking Artifact-Reduction Methods to Resolve the Electrically Evoked Compound Action Potential

Objective: Cochlear implant manufacturers utilize different artifact-reduction methods to measure electrically evoked compound action potentials (ECAPs) in the clinical software. Two commercially available artifact-reduction techniques include forward masking (FwdMsk) and alternating polarity (AltPol). AltPol assumes that responses to the opposing polarities are equal, which is likely problematic. On the other hand, FwdMsk can yield inaccurate waveforms if the masker does not effectively render all neurons into a refractory state. The goal of this study was to compare ECAP thresholds, amplitudes, and slopes of the amplitude growth functions (AGFs) using FwdMsk and AltPol to determine whether the two methods yield similar results. Design: ECAP AGFs were obtained from three electrode regions (basal, middle, and apical) across 24 ears in 20 Cochlear Ltd. recipients using both FwdMsk and AltPol methods. AltPol waveforms could not be resolved for recipients of devices with the older-generation chip (CI24R(CS); N = 6). Results: Results comparing FwdMsk and AltPol in the CI24RE- and CI512-generation devices showed significant differences in threshold, AGF slope, and amplitude between methods. FwdMsk resulted in lower visual-detection thresholds (p
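
For context, the two artifact-reduction schemes compared here rest on simple waveform arithmetic: forward masking uses the standard four-frame subtraction (probe alone, masker plus probe, masker alone, no stimulus), and alternating polarity averages responses to opposite-polarity pulses. The sketch below illustrates both as a textbook formulation, not the clinical software's implementation.

```python
import numpy as np

def ecap_forward_masking(probe, masker_probe, masker, baseline):
    """Forward-masking artifact reduction: ECAP = A - B + C - D, where
    A = probe alone, B = masker + probe (neural response to the probe is in a
    refractory state), C = masker alone, D = no stimulus. The stimulus artifact
    and masker response cancel, leaving the neural response to the probe."""
    return probe - masker_probe + masker - baseline

def ecap_alternating_polarity(cathodic_first, anodic_first):
    """Alternating-polarity artifact reduction: averaging responses to
    opposite-polarity pulses cancels the polarity-inverting stimulus artifact,
    assuming the neural responses to both polarities are equal."""
    return (np.asarray(cathodic_first) + np.asarray(anodic_first)) / 2
```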

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLscE
via IFTTT

Validation of a French-Language Version of the Spatial Hearing Questionnaire, Cluster Analysis and Comparison with the Speech, Spatial, and Qualities of Hearing Scale

Objectives: To validate a French-language version of the spatial hearing questionnaire (SHQ), including investigating its internal structure using cluster analysis and exploring its construct validity on a large population of hearing-impaired (HI) and normal-hearing (NH) subjects, and to compare the SHQ with the speech, spatial, and qualities of hearing scale (SSQ) in the same population. Design: The SHQ was translated in accordance with the principles of the Universalist Model of cross-cultural adaptation of patient-reported outcome instruments. The SSQ and SHQ were then presented in a counterbalanced order, in a self-report mode, in a population of 230 HI subjects (mean age = 54 years and pure-tone audiometry [PTA] on the better ear = 28 dB HL) and 100 NH subjects (mean age = 21 years). The SHQ feasibility, readability, and psychometric properties were systematically investigated using reliability indices, cluster and factor analyses, and multiple regression analyses. SHQ characteristics were compared both to different literature data obtained with different language versions and to the SSQ scores obtained in the same population. Results: Internal validity was high and very good reproducibility of scores and intersubject variability were obtained across the 24 items between the English and French SHQ for NH subjects. Factor and cluster analyses concurred in identifying five correlated factors, corresponding to several SHQ subscales: (1) speech in noise (corresponding to SHQ subscales 7 and 8), (2) localization of voice sounds from behind, (3) speech in quiet (corresponding to SHQ subscale 1), (4) localization of everyday sounds, and (5) localization of voices and music (corresponding to parts of the SHQ localization subscale). Correlations between SSQ subscales and SHQ factors identified the greatest correlations between SHQ factors 2, 4, and 5 and SSQ spatial subscales, whereas SHQ factor 1 had the greatest correlation with SSQ_speech. SHQ and SSQ scores were similar, whether in NH subjects (8.5 versus 8.4) or in HI subjects (6.6 for both), sharing more than 80% of variance. The SHQ localization subscale gave scores similar to the SSQ spatial subscale, sharing more than 75% of variance. Construct validity identified better ear PTA and PTA asymmetry as the two main predictors of SHQ scores, to a degree similar to that seen for the SSQ. The SHQ was shorter, easier to read and less sensitive to the number of years of formal education than the SSQ, but this came at a cost of ecological validity, which was rated higher for the SSQ than for the SHQ. Conclusions: A comparison of factor analysis outcomes among the English, Dutch, and French versions of the SHQ confirmed good conceptual equivalence across languages and robustness of the SHQ for use in international settings. In addition, SHQ and SSQ scores showed remarkable similarities, suggesting the possibility of extrapolating the results from one questionnaire to the other. Although the SHQ was originally designed in a population of cochlear implant patients, the present results show that its usefulness could easily be extended to noncochlear-implanted, HI subjects.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLO2W
via IFTTT

To Ear and Hearing Peer Reviewers: Thank You

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/29dxCFs
via IFTTT

Effects of Negative Middle Ear Pressure on Wideband Acoustic Immittance in Normal-Hearing Adults

Objectives: Wideband acoustic immittance (WAI) measurements are capable of quantifying middle ear performance over a wide range of frequencies relevant to human hearing. Static pressure in the middle ear cavity affects sound transmission to the cochlea, but few datasets exist to quantify the relationship between middle ear transmission and the static pressure. In this study, WAI measurements of normal ears are analyzed in both negative middle ear pressure (NMEP) and ambient middle ear pressure (AMEP) conditions, with a focus on the effects of NMEP in individual ears. Design: Eight subjects with normal middle ear function were trained to induce consistent NMEPs, quantified by the tympanic peak pressure (TPP) and WAI. The effects of NMEP on the wideband power absorbance level are analyzed for individual ears. Complex (magnitude and phase) WAI quantities at the tympanic membrane (TM) are studied by removing the delay due to the residual ear canal (REC) volume between the probe tip and the TM. WAI results are then analyzed using a simplified classical model of the middle ear. Results: For the 8 ears presented here, NMEP has the largest and most significant effect across ears from 0.8 to 1.9 kHz, resulting in reduced power absorbance by the middle ear and cochlea. On average, NMEP causes a decrease in the power absorbance level for low- to mid-frequencies, and a small increase above about 4 kHz. The effects of NMEP on WAI quantities, including the absorbance level and TM impedance, vary considerably across ears. The complex WAI at the TM and fitted model parameters show that NMEP causes a decrease in the aggregate compliance at the TM. Estimated REC delays show little to no dependence on NMEP. Conclusions: In agreement with previous results, these data show that the power absorbance level is most sensitive to NMEP around 1 kHz. The REC effect is removed from WAI measurements, allowing for direct estimation of complex WAI at the TM. These estimates show NMEP effects consistent with an increased stiffness in the middle ear, which could originate from the TM, tensor tympani, annular ligament, or other middle ear structures. Model results quantify this nonlinear, stiffness-related change in a systematic way that is not dependent on averaging WAI results in frequency bands. Given the variability of pressure effects, likely related to intersubject variability at AMEP, TPP is not a strong predictor of change in WAI at the TM. More data and modeling will be needed to better quantify the relationship between NMEP, WAI, and middle ear transmission.
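
The wideband power absorbance analyzed here is derived from the complex pressure reflectance as A(f) = 1 − |R(f)|², the fraction of incident acoustic power absorbed rather than reflected, and the absorbance level expresses that quantity in dB. A minimal sketch, with the reflectance array assumed to come from a WAI measurement system:

```python
import numpy as np

def power_absorbance(reflectance):
    """Power absorbance from the complex pressure reflectance R(f):
    A(f) = 1 - |R(f)|^2, i.e. the fraction of incident acoustic power
    absorbed by the middle ear and cochlea rather than reflected."""
    return 1.0 - np.abs(reflectance) ** 2

def absorbance_level_db(reflectance):
    """Absorbance expressed as a level in dB (0 dB = full absorption)."""
    return 10.0 * np.log10(power_absorbance(reflectance))
```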

from #Audiology via ola Kala on Inoreader http://ift.tt/29ixVSo
via IFTTT

The Use of Prosodic Cues in Sentence Processing by Prelingually Deaf Users of Cochlear Implants

Objectives: The purpose of this study is to assess the use of prosodic and contextual cues to focus by prelingually deaf adolescent users of cochlear implants (CIs) when identifying target phonemes. We predict that CI users will have slower reaction times to target phonemes compared with a group of normal-hearing (NH) peers. We also predict that reaction times will be faster when both prosodic and contextual (semantic) cues are provided. Design: Eight prelingually deaf adolescent users of CIs and 8 adolescents with NH completed 2 phoneme-monitoring experiments. Participants were aged between 13 and 18 years. The mean age at implantation for the CI group was 1.8 years (SD: 1.0). In the prosodic condition, reaction times to a target phoneme in a linguistically focused (i.e., stressed) word were compared between the two groups. The semantic condition compared reaction time with target phonemes when contextual cues to focus were provided in addition to prosodic cues. Results: Reaction times of the CI group were slower than those of the NH group in both the prosodic and semantic conditions. A linear mixed model was used to compare reaction times using Group as a fixed factor and Phoneme and Subject as random factors. When only prosodic cues (prosodic condition) to focus location were provided, the mean reaction time of the CI group was 512 msec compared with 317 msec for the NH group, and this difference was significant (p
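
The linear mixed model described in the results, with Group as a fixed factor and Subject and Phoneme as crossed random factors, could be sketched as below. The statsmodels formulation shown here (a single dummy group with variance components for both random factors) is one common way to express crossed random intercepts; it is an assumption rather than the authors' code, as are the column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_reaction_time_model(df: pd.DataFrame):
    """Reaction time ~ Group (fixed) with crossed random intercepts for
    Subject and Phoneme, expressed as variance components over one group."""
    df = df.assign(one_group=1)
    model = smf.mixedlm(
        "rt ~ C(group)",
        data=df,
        groups="one_group",
        re_formula="0",
        vc_formula={"subject": "0 + C(subject)", "phoneme": "0 + C(phoneme)"},
    )
    return model.fit()
```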

from #Audiology via ola Kala on Inoreader http://ift.tt/29ixIyR
via IFTTT

Acoustic Cue Weighting by Adults with Cochlear Implants: A Mismatch Negativity Study

Objectives: Formant rise time (FRT) and amplitude rise time (ART) are acoustic cues that inform phonetic identity. FRT represents the rate of transition of the formant(s) to a steady state, while ART represents the rate at which the sound reaches its peak amplitude. Normal-hearing (NH) native English speakers weight FRT more than ART during the perceptual labeling of the /ba/–/wa/ contrast. This weighting strategy is reflected neurophysiologically in the magnitude of the mismatch negativity (MMN): the MMN is larger during the FRT than the ART distinction. The present study examined the neurophysiological basis of acoustic cue weighting in adult cochlear implant (CI) listeners using the MMN design. It was hypothesized that individuals with CIs who weight ART more in behavioral labeling (ART users) would show larger MMNs during the ART than the FRT contrast, and the opposite would be seen for FRT users. Design: Electroencephalography was recorded while 20 adults with CIs listened passively to combinations of 3 synthetic speech stimuli: a /ba/ with /ba/-like FRT and ART; a /wa/ with /wa/-like FRT and ART; and a /ba/wa stimulus with /ba/-like FRT and /wa/-like ART. The MMN response was elicited during the FRT contrast by having participants passively listen to a train of /wa/ stimuli interrupted occasionally by /ba/wa stimuli, and vice versa. For the ART contrast, the same procedure was implemented using the /ba/ and /ba/wa stimuli. Results: Both ART and FRT users with CIs elicited MMNs that were equal in magnitude during FRT and ART contrasts, with the exception that FRT users exhibited MMNs for ART and FRT contrasts that were temporally segregated. That is, their MMNs occurred significantly earlier during the ART contrast (~100 msec following sound onset) than during the FRT contrast (~200 msec). In contrast, the MMNs for ART users of both contrasts occurred later and were not significantly separable in time (~230 msec). Interestingly, this temporal segregation observed in FRT users is consistent with the MMN behavior in NH listeners. Conclusions: Results suggest that listeners with CIs who learn to classify phonemes based on formant dynamics, consistent with NH listeners, develop a strategy similar to NH listeners, in which the organization of the amplitude and spectral representations of phonemes in auditory memory are temporally segregated.
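
The MMN magnitude discussed above is conventionally measured from the difference wave between the averaged response to the rare (deviant) stimulus and the averaged response to the frequent (standard) stimulus. A minimal sketch with hypothetical ERP arrays:

```python
import numpy as np

def mismatch_negativity(erp_deviant, erp_standard):
    """MMN difference wave: averaged ERP to the deviant stimulus minus the
    averaged ERP to the standard stimulus. The MMN appears as a negative
    deflection in this difference wave; its peak amplitude and latency are
    the quantities typically compared across conditions."""
    return np.asarray(erp_deviant) - np.asarray(erp_standard)
```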

from #Audiology via ola Kala on Inoreader http://ift.tt/29ixTtM
via IFTTT

Behavioral Pure-Tone Threshold Shifts Caused by Tympanic Membrane Electrodes

imageObjective: To determine whether tympanic membrane (TM) electrodes induce behavioral pure-tone threshold shifts. Design: Pure-tone thresholds (250 to 8000 Hz) were measured twice in test (n = 18) and control (n = 10) groups. TM electrodes were placed between first and second threshold measurements in the test group, whereas the control group did not receive electrodes. Pure-tone threshold shifts were compared between groups. The effect of TM electrode contact location on threshold shifts was evaluated in the test group. Results: TM electrodes significantly increased average low-frequency thresholds, 7.5 dB at 250 Hz and 4.2 dB at 500 Hz, and shifts were as large as 25 dB in individual ears. Also, threshold shifts did not appear to vary at any frequency with TM electrode contact location. Conclusions: Low-frequency threshold shifts occur when using TM electrodes and insert earphones. These findings are relevant to interpreting electrocochleographic responses to low-frequency stimuli.

from #Audiology via ola Kala on Inoreader http://ift.tt/296U94B
via IFTTT

Impact of Hearing Aid Technology on Outcomes in Daily Life I: The Patients’ Perspective

imageObjectives: One of the challenges facing hearing care providers when recommending hearing aids is the choice of device technology level. Major manufacturers market families of hearing aids that are described as spanning the range from basic technology to premium technology. Premium technology hearing aids include acoustical processing capabilities (features) that are not found in basic technology instruments. These premium features are intended to yield improved hearing in daily life compared with basic-feature devices. However, independent research that establishes the incremental effectiveness of premium-feature devices compared with basic-feature devices is lacking. This research was designed to explore reported differences in hearing abilities for adults using premium- and basic-feature hearing aids in their daily lives. Design: This was a single-blinded, repeated, crossover trial in which the participants were blinded. All procedures were carefully controlled to limit researcher bias. Forty-five participants used carefully fitted bilateral hearing aids for 1 month and then provided data to describe the hearing improvements or deficiencies noted in daily life. Typical participants were 70 years old with mild to moderate adult-onset hearing loss bilaterally. Each participant used four pairs of hearing aids: premium- and basic-feature devices from brands marketed by each of two major manufacturers. Participants were blinded about the devices they used and about the research questions. Results: All of the outcomes were designed to capture the participant’s point of view about the benefits of the hearing aids. Three types of data were collected: change in hearing-related quality of life, extent of agreement with six positively worded statements about everyday hearing with the hearing aids, and reported preferences between the premium- and basic-feature devices from each brand as well as across all four research hearing aids combined. None of these measures yielded a statistically significant difference in outcomes between premium- and basic-feature devices. Participants did not report better outcomes with premium processing with any measure. Conclusions: It could reasonably be asserted that the patient’s perspective is the gold standard for hearing aid effectiveness. While the acoustical processing provided by premium features can potentially improve scores on tests conducted in contrived conditions in a laboratory, or on specific items in a questionnaire, this does not ensure that the processing will be of noteworthy benefit when the hearing aid is used in the real world challenges faced by the patient. If evidence suggests the patient cannot detect that premium features yield improvements over basic features in daily life, what is the responsibility of the provider in recommending hearing aid technology level? In the present research, there was no evidence to suggest that premium-feature devices yielded better outcomes than basic-feature devices from the patient’s point of view. All of the research hearing aids were substantially, but equally, helpful. Further research is needed on this topic with other hearing aids and other manufacturers. In the meantime, providers should insist on scientifically credible independent evidence to support effectiveness claims for any hearing help devices.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLzFf
via IFTTT

Development of Insertion Models Predicting Cochlear Implant Electrode Position

imageObjectives: To assess the possibility of defining a preferable range for electrode array insertion depth and surgical insertion distance for which frequency mismatch is minimized. To develop a surgical insertion guidance tool by which a preferred target angle can be attained using preoperatively available anatomical data and surgically controllable insertion distance. Design: Multiplanar reconstructions of pre- and post-operative CT scans were evaluated in a population of 336 patients implanted with the CII HiFocus1 or HiFocus1J implant (26 bilateral implantees included). Cochlear radial distances were measured on four measurement axes on the preoperative CT scan. Electrode contact positions were obtained in angular depth, distance from the round window, and distance to the modiolus center. Frequency mismatch was calculated based on the yielded frequency as a function of the angular position per contact. Cochlear diameters were clustered into three cochlear size groups with K-sample clustering. Using spiral fitting and general linear regression modeling, the feasibility of different insertion models with cochlear size measures and surgical insertion as input parameters was analyzed. The final developed model was internally validated with bootstrapping to calculate the optimism-corrected R2. Results: Frequency mismatch was minimized for a surgical insertion of 6.7 mm and an insertion depth of 484°. Cochlear size clusters were derived consisting of a “small” (N = 117), “medium” (N = 171), and “large” (N = 74) cluster with mean insertion depths of 506°, 480°, and 441°, respectively. The relation between surgical insertion (LE16) and insertion depth (θE1) differed significantly between the three clusters (p
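The optimism-corrected R2 mentioned in the design can be obtained with a Harrell-style bootstrap: refit the model on each resample, measure how much the resample R2 exceeds that refit's R2 on the original data, and subtract the average excess from the apparent R2. The sketch below assumes placeholder predictors and outcome, not the study's variables.

```python
# Hedged sketch of bootstrap optimism correction; X and y are simulated placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 336
X = rng.normal(size=(n, 2))                  # e.g. a cochlear size measure and surgical insertion distance
y = 480 + 30 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 25, n)   # e.g. angular insertion depth (toy)

apparent = r2_score(y, LinearRegression().fit(X, y).predict(X))

optimisms = []
for _ in range(500):
    idx = rng.integers(0, n, n)                        # bootstrap resample with replacement
    m = LinearRegression().fit(X[idx], y[idx])
    boot_r2 = r2_score(y[idx], m.predict(X[idx]))      # R2 on the resample
    test_r2 = r2_score(y, m.predict(X))                # same model evaluated on the original data
    optimisms.append(boot_r2 - test_r2)

corrected = apparent - np.mean(optimisms)
print(f"apparent R2 = {apparent:.3f}, optimism-corrected R2 = {corrected:.3f}")
```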

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLOQC
via IFTTT

How Can Public Health Approaches and Perspectives Advance Hearing Health Care?

imageThis commentary explores the role of public health programs and themes in hearing health care. Ongoing engagement within the hearing professional community is needed to determine how to change the landscape and identify important features in the evolution of population hearing health care. Why and how to leverage existing public health programs and develop new programs to improve hearing health in older individuals is an important topic. Hearing professionals are encouraged to reflect on these themes and recommendations and join the discussion about the future of hearing science at the population level.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLh19
via IFTTT

fMRI as a Preimplant Objective Tool to Predict Postimplant Oral Language Outcomes in Children with Cochlear Implants

imageObjectives: Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Design: Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLwck
via IFTTT

A Randomized Control Trial: Supplementing Hearing Aid Use with Listening and Communication Enhancement (LACE) Auditory Training

imageObjective: To examine the effectiveness of the Listening and Communication Enhancement (LACE) program as a supplement to standard-of-care hearing aid intervention in a Veteran population. Design: A multisite randomized controlled trial was conducted to compare outcomes following standard-of-care hearing aid intervention supplemented with (1) LACE training using the 10-session DVD format, (2) LACE training using the 20-session computer-based format, (3) placebo auditory training (AT) consisting of actively listening to 10 hr of digitized books on a computer, and (4) educational counseling—the control group. The study involved 3 VA sites and enrolled 279 veterans. Both new and experienced hearing aid users participated to determine if outcomes differed as a function of hearing aid user status. Data for five behavioral and two self-report measures were collected during three research visits: baseline, immediately following the intervention period, and at 6 months postintervention. The five behavioral measures were selected to determine whether the perceptual and cognitive skills targeted in LACE training generalized to untrained tasks that required similar underlying skills. The two self-report measures were completed to determine whether the training resulted in a lessening of activity limitations and participation restrictions. Outcomes were obtained from 263 participants immediately following the intervention period and from 243 participants 6 months postintervention. Analyses of covariance comparing performance on each outcome measure separately were conducted using intervention and hearing aid user status as between-subject factors, visit as a within-subject factor, and baseline performance as a covariate. Results: No statistically significant main effects or interactions were found for the use of LACE on any outcome measure. Conclusions: Findings from this randomized controlled trial show that LACE training does not result in improved outcomes over standard-of-care hearing aid intervention alone. Potential benefits of AT may be different than those assessed by the performance and self-report measures utilized here. Individual differences not assessed in this study should be examined to evaluate whether AT with LACE has any benefits for particular individuals. Clinically, these findings suggest that audiologists may want to temper the expectations of their patients who embark on LACE training.
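A minimal sketch of an analysis of covariance in the spirit of the one described above, with intervention arm and hearing aid user status as between-subject factors and baseline performance as a covariate, fit as an OLS model with a Type II ANOVA table; the within-subject visit factor is omitted for brevity, and the arm labels, column names, and simulated scores are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 263
df = pd.DataFrame({
    "arm": rng.choice(["LACE_DVD", "LACE_PC", "placebo_AT", "counseling"], n),
    "user": rng.choice(["new", "experienced"], n),
    "baseline": rng.normal(50, 10, n),
})
# Simulated outcome with no true arm effect, consistent with the null result reported above.
df["outcome"] = df["baseline"] + rng.normal(2, 8, n)

model = smf.ols("outcome ~ C(arm) * C(user) + baseline", data=df).fit()
print(anova_lm(model, typ=2))
```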

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLKAt
via IFTTT

Testing Speech Recognition in Spanish-English Bilingual Children with the Computer-Assisted Speech Perception Assessment (CASPA): Initial Report

imageThis study evaluated the English version of the Computer-Assisted Speech Perception Assessment (E-CASPA) with Spanish-English bilingual children. E-CASPA has been evaluated with monolingual English speakers ages 5 years and older, but it is unknown whether a separate norm is necessary for bilingual children. Eleven Spanish-English bilingual and 12 English monolingual children (6 to 12 years old) with normal hearing participated. Responses were scored by word, phoneme, consonant, and vowel. Regardless of scoring method, performance across the three signal-to-noise ratio conditions was similar between groups, suggesting that the same norm can be used for both bilingual and monolingual children.

from #Audiology via ola Kala on Inoreader http://ift.tt/29kMoxu
via IFTTT

Temporal Response Properties of the Auditory Nerve in Implanted Children with Auditory Neuropathy Spectrum Disorder and Implanted Children with Sensorineural Hearing Loss

imageObjective: This study aimed to (1) characterize temporal response properties of the auditory nerve in implanted children with auditory neuropathy spectrum disorder (ANSD), and (2) compare results recorded in implanted children with ANSD with those measured in implanted children with sensorineural hearing loss (SNHL). Design: Participants included 28 children with ANSD and 29 children with SNHL. All subjects used cochlear nucleus devices in their test ears. Both ears were tested in 6 children with ANSD and 3 children with SNHL. For all other subjects, only one ear was tested. The electrically evoked compound action potential (ECAP) was measured in response to each of the 33 pulses in a pulse train (excluding the second pulse) for one apical, one middle-array, and one basal electrode. The pulse train was presented in a monopolar-coupled stimulation mode at 4 pulse rates: 500, 900, 1800, and 2400 pulses per second. Response metrics included the averaged amplitude, latencies of response components and response width, the alternating depth and the amount of neural adaptation. These dependent variables were quantified based on the last six ECAPs or the six ECAPs occurring within a time window centered around 11 to 12 msec. A generalized linear mixed model was used to compare these dependent variables between the 2 subject groups. The slope of the linear fit of the normalized ECAP amplitudes (re. amplitude of the first ECAP response) over the duration of the pulse train was used to quantify the amount of ECAP increment over time for a subgroup of 9 subjects. Results: Pulse train-evoked ECAPs were measured in all but 8 subjects (5 with ANSD and 3 with SNHL). ECAPs measured in children with ANSD had smaller amplitude, longer averaged P2 latency and greater response width than children with SNHL. However, differences in these two groups were only observed for some electrodes. No differences in averaged N1 latency or in the alternating depth were observed between children with ANSD and children with SNHL. Neural adaptation measured in these 2 subject groups was comparable for relatively short durations of stimulation (i.e., 11 to 12 msec). Children with ANSD showed greater neural adaptation than children with SNHL for a longer duration of stimulation. Amplitudes of ECAP responses rapidly declined within the first few milliseconds of stimulation, followed by a gradual decline up to 64 msec after stimulus onset in the majority of subjects. This decline exhibited an alternating pattern at some pulse rates. Further increases in pulse rate diminished this alternating pattern. In contrast, ECAPs recorded from at least one stimulating electrode in six ears with ANSD and three ears with SNHL showed a clear increase in amplitude over the time course of stimulation. The slope of linear regression functions measured in these subjects was significantly greater than zero. Conclusions: Some but not all aspects of temporal response properties of the auditory nerve measured in this study differ between implanted children with ANSD and implanted children with SNHL. These differences are observed for some but not all electrodes. A new neural response pattern is identified. Further studies investigating its underlying mechanism and clinical relevance are warranted.
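The slope metric described above (a linear fit of ECAP amplitudes, normalized to the first response, over the time course of the pulse train) reduces to a one-line polynomial fit; the sketch below uses made-up amplitudes and time points.

```python
import numpy as np

t_ms = np.linspace(0, 64, 32)                        # response times across the pulse train (msec), assumed
amps = 1.0 * np.exp(-t_ms / 20) + 0.35 + 0.02 * np.random.default_rng(3).normal(size=32)  # toy ECAP amplitudes
norm = amps / amps[0]                                # re. amplitude of the first ECAP response

slope, intercept = np.polyfit(t_ms, norm, 1)         # linear fit of normalized amplitude vs. time
print(f"slope = {slope:.4f} per msec "
      f"({'decline (adaptation)' if slope < 0 else 'growth over the train'})")
```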

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLBgk
via IFTTT

A Comparison of Alternating Polarity and Forward Masking Artifact-Reduction Methods to Resolve the Electrically Evoked Compound Action Potential

imageObjective: Cochlear implant manufacturers utilize different artifact-reduction methods to measure electrically evoked compound action potentials (ECAPs) in the clinical software. Two commercially available artifact-reduction techniques include forward masking (FwdMsk) and alternating polarity (AltPol). AltPol assumes that responses to the opposing polarities are equal, which is likely problematic. On the other hand, FwdMsk can yield inaccurate waveforms if the masker does not effectively render all neurons into a refractory state. The goal of this study was to compare ECAP thresholds, amplitudes, and slopes of the amplitude growth functions (AGFs) using FwdMsk and AltPol to determine whether the two methods yield similar results. Design: ECAP AGFs were obtained from three electrode regions (basal, middle, and apical) across 24 ears in 20 Cochlear Ltd. recipients using both FwdMsk and AltPol methods. AltPol waveforms could not be resolved for recipients of devices with the older-generation chip (CI24R(CS); N = 6). Results: Results comparing FwdMsk and AltPol in the CI24RE- and CI512-generation devices showed significant differences in threshold, AGF slope, and amplitude between methods. FwdMsk resulted in lower visual-detection thresholds (p
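A minimal sketch of how an amplitude growth function slope can be estimated per ear for each method and the two artifact-reduction methods then compared; the paired t-test here is a simple stand-in for whatever statistical comparison the authors used, and the stimulation levels and amplitudes are simulated.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
levels = np.arange(150, 231, 10)                 # stimulation levels in clinical units (assumed)

def agf_slope(amplitudes):
    """Slope of the amplitude growth function from a straight-line fit."""
    return np.polyfit(levels, amplitudes, 1)[0]

# Toy per-ear AGFs for 24 ears under each artifact-reduction method.
fwdmsk = [agf_slope(2.0 * (levels - 145) + rng.normal(0, 15, levels.size)) for _ in range(24)]
altpol = [agf_slope(1.7 * (levels - 145) + rng.normal(0, 15, levels.size)) for _ in range(24)]

print(ttest_rel(fwdmsk, altpol))                 # paired comparison of slopes between methods
```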

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLscE
via IFTTT

Validation of a French-Language Version of the Spatial Hearing Questionnaire, Cluster Analysis and Comparison with the Speech, Spatial, and Qualities of Hearing Scale

imageObjectives: To validate a French-language version of the spatial hearing questionnaire (SHQ), including investigating its internal structure using cluster analysis and exploring its construct validity on a large population of hearing-impaired (HI) and normal-hearing (NH) subjects, and to compare the SHQ with the speech, spatial, and qualities of hearing scale (SSQ) in the same population. Design: The SHQ was translated in accordance with the principles of the Universalist Model of cross-cultural adaptation of patient-reported outcome instruments. The SSQ and SHQ were then presented in a counterbalanced order, in a self-report mode, in a population of 230 HI subjects (mean age = 54 years and pure-tone audiometry [PTA] on the better ear = 28 dB HL) and 100 NH subjects (mean age = 21 years). The SHQ feasibility, readability, and psychometric properties were systematically investigated using reliability indices, cluster, and factor analyses and multiregression analyses. SHQ characteristics were compared both to different literature data obtained with different language versions and to the SSQ scores obtained in the same population. Results: Internal validity was high and very good reproducibility of scores and intersubject variability were obtained across the 24 items between the English and French SHQ for NH subjects. Factor and cluster analyses concurred in identifying five correlated factors, corresponding to several SHQ subscales: (1) speech in noise (corresponding to SHQ subscales 7 and 8), (2) localization of voice sounds from behind, (3) speech in quiet (corresponding to SHQ subscale 1), (4) localization of everyday sounds, and (5) localization of voices and music (corresponding to parts of the SHQ localization subscale). Correlations between SSQ subscales and SHQ factors identified the greatest correlations between SHQ factors 2, 4, and 5 and SSQ spatial subscales, whereas SHQ factor 1 had the greatest correlation with SSQ_speech. SHQ and SSQ scores were similar, whether in NH subjects (8.5 versus 8.4) or in HI subjects (6.6 for both), sharing more than 80% of variance. The SHQ localization subscale gave similar scores as the SSQ spatial subscale, sharing more than 75% of variance. Construct validity identified better ear PTA and PTA asymmetry as the two main predictors of SHQ scores, to a degree similar to that seen for the SSQ. The SHQ was shorter, easier to read and less sensitive to the number of years of formal education than the SSQ, but this came at a cost of ecological validity, which was rated higher for the SSQ than for the SHQ. Conclusions: A comparison of factor analysis outcomes among the English, Dutch, and French versions of the SHQ confirmed good conceptual equivalence across languages and robustness of the SHQ for use in international settings. In addition, SHQ and SSQ scores showed remarkable similarities, suggesting the possibility of extrapolating the results from one questionnaire to the other. Although the SHQ was originally designed in a population of cochlear implant patients, the present results show that its usefulness could easily be extended to noncochlear-implanted, HI subjects.
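A minimal sketch of the kind of item-level factor and cluster analysis described above, using an exploratory factor analysis followed by K-means on the item loadings; the simulated responses, the clustering of loadings, and the library choices are assumptions, with only the item count (24), sample size (230), and the five-factor solution taken from the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_subjects, n_items = 230, 24
latent = rng.normal(size=(n_subjects, 5))                    # 5 underlying abilities (toy)
loadings = rng.normal(scale=0.8, size=(5, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_subjects, n_items))

fa = FactorAnalysis(n_components=5).fit(responses)
item_loadings = fa.components_.T                             # items x factors

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(item_loadings)
print("item cluster assignment:", clusters)
```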

from #Audiology via ola Kala on Inoreader http://ift.tt/29kLO2W
via IFTTT

To Ear and Hearing Peer Reviewers: Thank You

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/29dxCFs
via IFTTT

Age-Related Changes in Binaural Interaction at Brainstem Level

imageObjectives: Age-related hearing loss hampers the ability to understand speech in adverse listening conditions. This is attributed to a complex interaction of changes in the peripheral and central auditory system. One aspect that may deteriorate across the lifespan is binaural interaction. The present study investigates binaural interaction at the level of the auditory brainstem. It is hypothesized that brainstem binaural interaction deteriorates with advancing age. Design: Forty-two subjects of various ages participated in the study. Auditory brainstem responses (ABRs) were recorded using clicks and 500 Hz tone-bursts. ABRs were elicited by monaural right, monaural left, and binaural stimulation. Binaural interaction was investigated in two ways. First, grand averages of the binaural interaction component were computed for each age group. Second, wave V characteristics of the binaural ABR were compared with those of the summed left and right ABRs. Results: Binaural interaction in the click ABR was demonstrated by shorter latencies and smaller amplitudes in the binaural compared with the summed monaural responses. For the 500 Hz tone-burst ABR, no latency differences were found. However, amplitudes were significantly smaller in the binaural than in the summed monaural condition. An age effect was found for the 500 Hz tone-burst ABR, but not for the click ABR. Conclusions: Brainstem binaural interaction seems to decline with age. Interestingly, these changes seem to be stimulus-dependent.
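The binaural interaction component referred to above is conventionally derived by subtracting the sum of the two monaural ABRs from the binaurally evoked ABR; the sketch below shows that subtraction on placeholder waveforms, with the sampling rate and epoch length assumed.

```python
import numpy as np

fs = 20000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 0.012, 1 / fs)               # 12 msec epoch (assumed)
left = np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004) * 0.3e-6    # toy monaural-left ABR
right = np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004) * 0.3e-6   # toy monaural-right ABR
binaural = 0.9 * (left + right)               # binaural response slightly smaller than the sum

bic = binaural - (left + right)               # binaural interaction component: BIC(t) = B(t) - [L(t) + R(t)]
print(f"peak BIC amplitude: {np.abs(bic).max() * 1e6:.3f} uV")
```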

from #Audiology via ola Kala on Inoreader http://ift.tt/29dxBRU
via IFTTT

Maturation of Mechanical Impedance of the Skin-Covered Skull: Implications for Soft Band Bone-Anchored Hearing Systems Fitted in Infants and Young Children

imageObjectives: Little is known about the maturational changes in the mechanical properties of the skull and how they might contribute to infant–adult differences in bone conduction hearing sensitivity. The objective of this study was to investigate the mechanical impedance of the skin-covered skull for different skull positions and contact forces for groups of infants, young children, and adults. These findings provide a better understanding of how changes in mechanical impedance might contribute to developmental changes in bone conduction hearing, and might provide insight into how fitting and output verification protocols for bone-anchored hearing systems (BAHS) could be adapted for infants and young children. Design: Seventy-seven individuals participated in the study, including 63 infants and children (ages 1 month to 7 years) and 11 adults. Mechanical impedance magnitude for the forehead and temporal bone was collected for contact forces of 2, 4, and 5.4 N using an impedance head, a BAHS transducer, and a specially designed holding device. Mechanical impedance magnitude was determined across frequency using a stepped sine sweep from 100 to 10,000 Hz, and divided into low- and high-frequency sets for analysis. Results: Mechanical impedance magnitude was lowest for the youngest infants and increased throughout maturation in the low frequencies. For high frequencies, the youngest infants had the highest impedance, but only for a temporal bone placement. Impedance increased with increasing contact force for low frequencies for each age group and for both skull positions. The effect of placement was significant for high frequencies for each contact force and for each age group, except for the youngest infants. Conclusions: Our findings show that mechanical impedance properties change systematically up to 7 years old. The significant age-related differences in mechanical impedance suggest that infant–adult differences in bone conduction thresholds may be related, at least in part, to properties of the immature skull and overlying skin and tissues. These results have important implications for fitting the soft band BAHS on infants and young children. For example, verification of output force from a BAHS on a coupler designed with adult values may not be appropriate for infants. This may also hold true for transducer calibration when assessing bone conduction hearing thresholds in infants for different skull locations. The results have two additional clinical implications for fitting soft band BAHSs. First, parents should be counseled to maintain sufficient and consistent tightness so that the output from the BAHS does not change as the child moves around during everyday activities. Second, placement of a BAHS on the forehead versus the temporal bone results in changes in mechanical impedance, which may contribute to a decrease in signal level at the cochlea, as it has been previously demonstrated that bone conduction thresholds are poorer at the forehead compared with a temporal placement.
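As background for the impedance measurements described above, at each stepped-sine frequency the mechanical impedance is the ratio of the force and velocity phasors, Z(f) = F(f)/v(f), and its magnitude is what is reported across frequency; the sketch below demodulates simulated force and velocity signals at one assumed frequency.

```python
import numpy as np

fs = 48000.0                                  # sampling rate in Hz (assumed)
f0 = 1000.0                                   # one frequency of the stepped sweep (Hz, assumed)
t = np.arange(0, 0.5, 1 / fs)
force = 0.2 * np.cos(2 * np.pi * f0 * t)                  # toy force signal from the impedance head (N)
velocity = 0.004 * np.cos(2 * np.pi * f0 * t - 0.6)       # toy velocity signal (m/s), lagging the force

ref = np.exp(-1j * 2 * np.pi * f0 * t)                    # complex reference at f0
F = 2 * np.mean(force * ref)                              # complex amplitude of the force
V = 2 * np.mean(velocity * ref)                           # complex amplitude of the velocity
Z = F / V                                                 # mechanical impedance at f0
print(f"|Z| = {abs(Z):.1f} N*s/m at {f0:.0f} Hz")
```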

from #Audiology via ola Kala on Inoreader http://ift.tt/29dy724
via IFTTT

Auditory Impairments in HIV-Infected Children

imageObjectives: In a cross-sectional study of human immunodeficiency virus (HIV)-infected adults, the authors showed lower distortion product otoacoustic emissions (DPOAEs) in HIV+ individuals compared with controls as well as findings consistent with a central auditory processing deficit in HIV+ adults on antiretroviral therapy. The authors hypothesized that HIV+ children would also have a higher prevalence of abnormal central and peripheral hearing test results compared with HIV− controls. Design: Pure-tone thresholds, DPOAEs, and tympanometry were performed on 244 subjects (131 HIV+ and 113 HIV− subjects). Thirty-five of the HIV+, and 3 of the HIV− subjects had a history of tuberculosis treatment. Gap detection results were available for 18 HIV− and 44 HIV+ children. Auditory brainstem response results were available for 72 HIV− and 72 HIV+ children. Data from ears with abnormal tympanograms were excluded. Results: HIV+ subjects were significantly more likely to have abnormal tympanograms, histories of ear drainage, tuberculosis, or dizziness. All audiometric results were compared between groups using a two-way ANOVA with HIV status and ear drainage history as grouping variables. Mean audiometric thresholds, gap detection thresholds, and auditory brainstem response latencies did not differ between groups, although the HIV+ group had a higher proportion of individuals with a hearing loss >25 dB HL in the better ear. The HIV+ group had reduced DPOAE levels (p
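A minimal sketch of a two-way ANOVA with HIV status and ear drainage history as grouping variables, matching the analysis named above; the outcome column, effect sizes, and simulated data are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n = 244
df = pd.DataFrame({
    "hiv": rng.choice(["HIV_pos", "HIV_neg"], n, p=[131 / 244, 113 / 244]),
    "drainage": rng.choice(["yes", "no"], n, p=[0.2, 0.8]),
})
# Toy DPOAE levels with a small HIV effect, purely for illustration.
df["dpoae_level"] = 10 - 2 * (df["hiv"] == "HIV_pos") + rng.normal(0, 4, n)

fit = smf.ols("dpoae_level ~ C(hiv) * C(drainage)", data=df).fit()
print(anova_lm(fit, typ=2))
```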

from #Audiology via ola Kala on Inoreader http://ift.tt/29dxF3X
via IFTTT

Confirmation of PDZD7 as a Nonsyndromic Hearing Loss Gene

imageObjective: PDZD7 was identified in 2009 in a family with apparent nonsyndromic sensorineural hearing loss. However, subsequent clinical reports have associated PDZD7 with digenic Usher syndrome, the most common cause of deaf-blindness, or as a modifier of retinal disease. No further reports have validated this gene for nonsyndromic hearing loss, intuitively calling correct genotype–phenotype association into question. This report describes a validating second case for biallelic mutations in PDZD7 causing nonsyndromic mild to severe sensorineural hearing loss. It also provides detailed audiometric and ophthalmologic data excluding Usher syndrome in both the present proband (proband 1) and the first proband described in 2009 (proband 2). Design: Proband 1 was sequenced using a custom-designed next generation sequencing panel consisting of 151 deafness genes. Bioinformatics analysis and filtering disclosed two PDZD7 sequence variants (c.1648C>T, p.Q550* and c.2107del, p.S703Vfs*20). Segregation testing followed in the family. For both probands, audiograms were collected and analyzed for progressive hearing loss and detailed ophthalmic evaluations were performed including electroretinography. Results: Proband 1 demonstrated a prelingual, nonsyndromic, sensorineural hearing loss that progressed in the higher frequencies between 4 and 9 years old. PDZD7 segregation analysis confirmed biallelic inheritance (compound heterozygosity). Mutation analysis determined the c.1648C>T mutation as novel and reported the c.2107del deletion as rs397516633 with a calculated minor allele frequency of 0.000018. Clinical evaluation spanning well over a decade in proband 2 disclosed bilateral, nonprogressive hearing loss. Both probands showed healthy retinas, excluding Usher syndrome-like changes in the eye. Conclusions: PDZD7 is confirmed as a bona fide autosomal recessive nonsyndromic hearing loss gene. In both probands, there was no evidence of impaired vision or ophthalmic pathology. As the current understanding of PDZD7 mutations bridge Mendelian and complex phenotypes, the authors recommend careful variant interpretation, since PDZD7 is one of many genes associated with both Usher syndrome and autosomal recessive nonsyndromic hearing loss. Additional reports are required for understanding the complete phenotypic spectrum of this gene, including the possibility of high-frequency progression, as well as noise-induced hearing loss susceptibility in adult carriers. This report rules out all forms of Usher syndrome with an onset before 12 and 15 years old in probands 1 and 2, respectively. However, due to the young ages of the probands, this report is uninformative regarding older patients.

from #Audiology via ola Kala on Inoreader http://ift.tt/29iydZy
via IFTTT

Effects of Negative Middle Ear Pressure on Wideband Acoustic Immittance in Normal-Hearing Adults

imageObjectives: Wideband acoustic immittance (WAI) measurements are capable of quantifying middle ear performance over a wide range of frequencies relevant to human hearing. Static pressure in the middle ear cavity affects sound transmission to the cochlea, but few datasets exist to quantify the relationship between middle ear transmission and the static pressure. In this study, WAI measurements of normal ears are analyzed in both negative middle ear pressure (NMEP) and ambient middle ear pressure (AMEP) conditions, with a focus on the effects of NMEP in individual ears. Design: Eight subjects with normal middle ear function were trained to induce consistent NMEPs, quantified by the tympanic peak pressure (TPP) and WAI. The effects of NMEP on the wideband power absorbance level are analyzed for individual ears. Complex (magnitude and phase) WAI quantities at the tympanic membrane (TM) are studied by removing the delay due to the residual ear canal (REC) volume between the probe tip and the TM. WAI results are then analyzed using a simplified classical model of the middle ear. Results: For the 8 ears presented here, NMEP has the largest and most significant effect across ears from 0.8 to 1.9 kHz, resulting in reduced power absorbance by the middle ear and cochlea. On average, NMEP causes a decrease in the power absorbance level for low- to mid-frequencies, and a small increase above about 4 kHz. The effects of NMEP on WAI quantities, including the absorbance level and TM impedance, vary considerably across ears. The complex WAI at the TM and fitted model parameters show that NMEP causes a decrease in the aggregate compliance at the TM. Estimated REC delays show little to no dependence on NMEP. Conclusions: In agreement with previous results, these data show that the power absorbance level is most sensitive to NMEP around 1 kHz. The REC effect is removed from WAI measurements, allowing for direct estimation of complex WAI at the TM. These estimates show NMEP effects consistent with an increased stiffness in the middle ear, which could originate from the TM, tensor tympani, annular ligament, or other middle ear structures. Model results quantify this nonlinear, stiffness-related change in a systematic way, that is not dependent on averaging WAI results in frequency bands. Given the variability of pressure effects, likely related to intersubject variability at AMEP, TPP is not a strong predictor of change in WAI at the TM. More data and modeling will be needed to better quantify the relationship between NMEP, WAI, and middle ear transmission.
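Two of the quantities used above can be written compactly: power absorbance is 1 - |R|^2 for a pressure reflectance R (with the level expressed in dB), and one way to refer R from the probe tip to the TM is to advance its phase by the round-trip delay of the residual ear canal, 2L/c. The sketch below assumes a toy reflectance, canal length, and sound speed, not the study's measurements.

```python
import numpy as np

f = np.linspace(200, 8000, 200)                  # frequency in Hz
L, c = 0.012, 343.0                              # residual canal length (m) and sound speed (m/s), assumed
R_probe = 0.7 * np.exp(-1j * 2 * np.pi * f * 2 * L / c)   # toy probe-tip reflectance with a round-trip REC delay

absorbance = 1 - np.abs(R_probe) ** 2            # power absorbance (the delay does not change |R|)
absorbance_level_db = 10 * np.log10(absorbance)  # absorbance level in dB

R_tm = R_probe * np.exp(1j * 2 * np.pi * f * 2 * L / c)   # advance the phase by 2L/c to estimate R at the TM
print(absorbance_level_db[:3], np.angle(R_tm[:3]))
```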

from #Audiology via ola Kala on Inoreader http://ift.tt/29ixVSo
via IFTTT
