Friday, August 26, 2016

The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment

Objective: The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. Design: In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Results: Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH groups. This outcome pattern persisted when comparisons were restricted to subgroups of AHI and CHI adults matched for current auditory speech reception abilities. The data yielded no differences between groups in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Nor was any difference between groups found in sensitivity to morphosyntactic distortions when processing short, visually presented masked sentences. Conclusions: These data show that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities.
However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu33Mu
via IFTTT

Association Between Osteoporosis/Osteopenia and Vestibular Dysfunction in South Korean Adults

Objective: The associations of osteoporosis/osteopenia with vestibular dysfunction have not been well evaluated, and conflicting results have been reported. The purpose of this study was to examine the relation of low bone mineral density (BMD) to vestibular dysfunction. Design: The authors conducted a cross-sectional study in 3579 Korean adults aged 50 years and older who participated in the 2009 to 2010 Korea National Health and Nutrition Examination Survey. BMD was measured by dual-energy X-ray absorptiometry. Vestibular dysfunction was evaluated using the modified Romberg test of standing balance on firm and compliant support surfaces. Data were analyzed in 2015. Multiple logistic regression analysis was used to compute odds ratios (ORs) and 95% confidence intervals (CIs). Results: The prevalence of vestibular dysfunction was 4.3 ± 0.5%. After adjustment for potential confounders, the adjusted ORs for vestibular dysfunction based on BMD were 1.00 (reference) for normal BMD, 2.21 (95% CI: 1.08, 4.50) for osteopenia, and 2.47 (95% CI: 1.05, 5.81) for osteoporosis (p
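
As a side note on the statistics above: an adjusted OR and its 95% CI come from exponentiating a logistic-regression coefficient and its confidence bounds. A minimal sketch (the coefficient and standard error below are illustrative, not the study's values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient
    (beta) and its standard error (se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# e.g., a coefficient near 0.793 corresponds to an OR of about 2.21,
# matching the osteopenia estimate quoted above (se here is made up).
or_point, ci_lo, ci_hi = odds_ratio_ci(0.793, 0.363)
```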

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0D0F
via IFTTT

Reflectance Measures from Infant Ears With Normal Hearing and Transient Conductive Hearing Loss

Objective: The objective is to develop methods to utilize newborn reflectance measures for the identification of middle-ear transient conditions (e.g., middle-ear fluid) during the newborn period and ultimately during the first few months of life. Transient middle-ear conditions are a suspected source of failure to pass a newborn hearing screening. The ability to identify a conductive loss during the screening procedure could enable the referred ear to be either (1) cleared of a middle-ear condition and recommended for more extensive hearing assessment as soon as possible, or (2) suspected of a transient middle-ear condition and, if desired, rescreened before more extensive hearing assessment. Design: Reflectance measurements are reported from full-term, healthy, newborn babies in which one ear referred and one ear passed an initial auditory brainstem response newborn hearing screening and a subsequent distortion product otoacoustic emission screening on the same day. These same subjects returned for a detailed follow-up evaluation at age 1 month (range 14 to 35 days). In total, measurements were made on 30 subjects who had a unilateral refer near birth (during their first 2 days of life) and bilateral normal hearing at follow-up (about 1 month old). Three specific comparisons were made: (1) association of an ear's state with power reflectance near birth (referred versus passed ear), (2) changes in power reflectance of normal ears between newborn and 1 month old (maturation effects), and (3) association of an ear's newborn state (referred versus passed) with the ear's power reflectance at 1 month. In addition to these measurements, a set of preliminary data selection criteria was developed to ensure that analyzed data were not corrupted by acoustic leaks and other measurement problems.
Results: Within 2 days of birth, the power reflectance measured in newborn ears with transient middle-ear conditions (referred newborn hearing screening and passed hearing assessment at age 1 month) was significantly greater than power reflectance in newborn ears that passed the newborn hearing screening, across all frequencies (500 to 6000 Hz). Changes in power reflectance in normal ears from newborn to 1 month appear in approximately the 2000 to 5000 Hz range but are not present at other frequencies. The power reflectance at age 1 month does not depend significantly on the ear's state near birth (refer or pass hearing screening) for frequencies above 700 Hz; there might be small differences at lower frequencies. Conclusions: Power reflectance measurements are significantly different for ears that pass newborn hearing screening and ears that refer with middle-ear transient conditions. At age 1 month, about 90% of ears that referred at birth passed an auditory brainstem response hearing evaluation; within these ears, the power reflectance at 1 month did not differ between the ear that initially referred at birth and the ear that passed the hearing screening at birth for frequencies above 700 Hz. This study also proposes a preliminary set of criteria for determining when reflectance measures on young babies are corrupted by acoustic leaks, probes pressed against the ear-canal wall, or other measurement problems. Specifically proposed are "data selection criteria" that depend on the power reflectance, impedance magnitude, and impedance angle. Additional data collected in the future are needed to improve and test these proposed criteria.
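
For readers unfamiliar with the metric: power reflectance is the squared magnitude of the pressure reflection coefficient, which can be computed from the measured complex ear-canal impedance and the canal's characteristic impedance. A minimal sketch (the impedance values are illustrative):

```python
def power_reflectance(z_ear, z0):
    """Power reflectance |R|^2, where R is the pressure reflection
    coefficient derived from the complex ear-canal impedance z_ear
    and the canal's characteristic impedance z0."""
    r = (z_ear - z0) / (z_ear + z0)  # pressure reflection coefficient
    return abs(r) ** 2
```

A value near 1 means nearly all incident power is reflected (consistent with the elevated reflectance reported for fluid-filled middle ears), while 0 means full absorption.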

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1UVn
via IFTTT

The Physiological Basis and Clinical Use of the Binaural Interaction Component of the Auditory Brainstem Response

The auditory brainstem response (ABR) is a sound-evoked, noninvasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, the authors discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. The authors review how interaural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized.
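
The subtraction that defines the BIC is simple enough to state in code. A minimal sketch with the waveforms as plain sample lists (the DN1 picker is a crude stand-in for proper peak labeling):

```python
def binaural_interaction_component(abr_left, abr_right, abr_binaural):
    """BIC waveform: binaurally evoked ABR minus the sum of the two
    monaurally evoked ABRs, computed sample by sample."""
    return [b - (l + r) for l, r, b in zip(abr_left, abr_right, abr_binaural)]

def dn1_amplitude(bic):
    """Crude DN1 estimate: the largest negative deflection of the BIC."""
    return min(bic)
```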

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu31Em
via IFTTT

A Novel Algorithm to Derive Spread of Excitation Based on Deconvolution

Objective: The width of the spread of excitation (SOE) curve has been widely thought to represent an estimate of SOE. Therefore, correlations between psychophysical parameters, such as pitch discrimination and speech perception, and the width of SOE curves have long been investigated. However, to date, no relationships between these objective and subjective measurements have been determined. In a departure from the current thinking, the authors now propose that the SOE curve, recorded with forward masking, is the equivalent of a convolution operation. As such, deconvolution would be expected to retrieve the excitation areas attributable to either masker or probe, potentially more closely revealing the actual neural SOE. This study aimed to develop a new analytical tool with which to derive SOE using this principle. Design: Intraoperative SOE curve measurements of 16 subjects, implanted with an Advanced Bionics implant, were analyzed. Evoked compound action potential (eCAP)-based SOE curves were recorded on electrodes 3 to 16, using the forward masker paradigm with a variable masker. The measured SOE curves were then compared with predicted SOE curves, built by the convolution of basic excitation density profiles (EDPs). Predicted SOE curves were fitted to the measured SOEs by iterative adjustment of the EDPs for the masker and the probe. Results: It was possible to generate a good fit between the predicted and measured SOE curves, inclusive of their asymmetry. The rectangular EDP was of least value in terms of its ability to generate a good fit; smoother SOE curves were modeled using the exponential or Gaussian EDPs. In most subjects, the EDP width (i.e., the size of the excitation area) gradually changed from wide at the apex of the electrode array to narrow at the base.
A comparison of EDP widths to SOE curve widths, as calculated in the literature, revealed that the EDPs provide a measure of the SOE that is qualitatively distinct from that provided by conventional methods. Conclusions: This study shows that an eCAP-based SOE curve, measured with forward masking, can be treated as a convolution of EDPs for masker and probe. The poor fit achieved between the measured and modeled data using the rectangular EDP emphasizes the requirement for a sloping excitation area to mimic actual SOE recordings. Our deconvolution method provides an explanation for the frequently observed asymmetry of SOE curves measured along the electrode array, as this is a consequence of a wider excitation area in the apical part of the cochlea, in the absence of any asymmetry in the actual EDP. In addition, broader apical EDPs underlie the higher eCAP amplitudes found for apical stimulation.
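
The central idea, treating the measured SOE curve as a convolution of masker and probe excitation profiles, can be sketched as follows. The Gaussian EDP shape and widths here are illustrative, not the study's fitted values:

```python
import math

def convolve(a, b):
    """Plain discrete convolution, no external dependencies."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def gaussian_edp(width, n=15):
    """Illustrative Gaussian excitation density profile over n
    electrode positions, centered on the array."""
    c = (n - 1) / 2.0
    return [math.exp(-(((k - c) / width) ** 2)) for k in range(n)]

# Predicted SOE curve: masker EDP convolved with probe EDP.
predicted_soe = convolve(gaussian_edp(2.0), gaussian_edp(3.0))
```

Fitting would then iteratively adjust the EDP parameters until `predicted_soe` matches the measured curve; deconvolution inverts this step to recover the EDPs.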

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1heA
via IFTTT

Intelligibility of the Patient’s Speech Predicts the Likelihood of Cochlear Implant Success in Prelingually Deaf Adults

Objectives: The objective of this study was to determine the validity and clinical applicability of intelligibility of the patient's own speech, measured via a Vowel Identification Test (VOW), as a predictor of speech perception for prelingually deafened adults after 1 year of cochlear implant use. Specifically, the objective was to investigate the probability that a prelingually deaf patient, given a VOW score above (or below) a chosen cutoff point, reaches a postimplant speech perception score above (or below) a critical value. High predictive values for VOW could support preimplant counseling and implant candidacy decisions in individual patients. Design: One hundred fifty-two adult cochlear implant candidates with prelingual hearing impairment or deafness took part as speakers in a VOW; 149 speakers completed the test successfully. Recordings of the speech stimuli, consisting of nonsense words of the form [h]-V-[t], where V represents one of 15 vowels/diphthongs ([ ]), were presented to two normal-hearing listeners. The VOW score was expressed as the percentage of vowels identified correctly (averaged over the 2 listeners). Subsequently, the 149 participants enrolled in the cochlear implant selection procedure. Extremely poor speakers were excluded from implantation, as were patients who did not meet the regular selection criteria as developed for postlingually deafened patients. Of the 149 participants, 92 were selected for implantation. For the implanted group, speech perception data were collected at 1-year postimplantation. Results: Speech perception score at 1-year postimplantation (available for 77 of the 92 implanted participants) correlated positively with preimplant intelligibility of the patient's speech, as represented by VOW (r = 0.79, p

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0uKn
via IFTTT

Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech

Objectives: Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher-context (CUNY) sentences than for the lower-context (IEEE) sentences. Design: Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise-band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE), and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. Results: Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear.
For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points of normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. Conclusions: Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
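
The two gain measures compared above differ only in whether the improvement is scaled by the headroom left above the baseline score. A minimal sketch, assuming the conventional normalized-gain formula (the study's exact definition may differ):

```python
def percentage_point_gain(vocoder_alone, bimodal):
    """Raw improvement in percent-correct points."""
    return bimodal - vocoder_alone

def normalized_gain(vocoder_alone, bimodal):
    """Improvement scaled by the room left above baseline, so the same
    raw gain counts for more when the baseline is already high."""
    return 100.0 * (bimodal - vocoder_alone) / (100.0 - vocoder_alone)
```

For example, a 40% to 55% improvement is 15 percentage points but only 25 points of normalized gain, whereas 80% to 90% is 10 points raw but 50 points normalized.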

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0oCJ
via IFTTT

Human Envelope Following Responses to Amplitude Modulation: Effects of Aging and Modulation Depth

Objective: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time, and to compare these objective electrophysiological measures with subjective behavioral thresholds in young normal-hearing and older subjects. Design: Participants: Three groups of subjects included a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s). EFRs were analyzed as a function of the AM depth. In condition 2, auditory steady-state responses were recorded to fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A three-alternative forced-choice (3-AFC) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold. Results: Across all ages, the fixed-depth auditory steady-state response and the swept-AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly higher (not significant) behavioral AM detection thresholds than younger subjects. AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range.
In the young normal-hearing group, the EFR phase did not differ with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (or phase slope) was significantly correlated with the pure-tone threshold at 4 kHz. Conclusions: EFRs can be recorded using either the swept modulation depth or the discrete AM depth technique. Sweep recordings may provide additional valuable information at suprathreshold intensities, including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in the depth of AM. The phase-slope differences are likely related to differences in low- and high-frequency contributions to the EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present subjects, who did not have apparent temporal processing deficits.
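
The swept-depth stimulus of condition 1 (41-Hz AM on a noise carrier, depth growing from 2% at 5% per second) is straightforward to synthesize. A minimal sketch, with the sampling rate and Gaussian noise carrier as assumptions:

```python
import math
import random

def swept_am_noise(dur_s, fs=8000, fm=41.0, depth_start=0.02,
                   depth_rate=0.05, seed=1):
    """Noise carrier with a 41-Hz sinusoidal amplitude envelope whose
    modulation depth ramps up by 5% per second, capped at 100%."""
    rng = random.Random(seed)
    samples = []
    for n in range(int(dur_s * fs)):
        t = n / fs
        depth = min(1.0, depth_start + depth_rate * t)
        envelope = 1.0 + depth * math.sin(2.0 * math.pi * fm * t)
        samples.append(envelope * rng.gauss(0.0, 1.0))
    return samples
```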

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0CK9
via IFTTT

Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing

Objectives: This study aimed to determine whether younger and older listeners with normal hearing who differ in working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than in younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity for speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Design: Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance.
Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Results: Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. Conclusions: The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.
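
The adaptive procedure used to find the SNR for 50% correct is typically a simple 1-up/1-down track. A minimal sketch (the step size and reversal count are assumptions, and `respond` stands in for a listener's trial-by-trial responses):

```python
def track_srt(respond, start_snr=0.0, step=2.0, n_reversals=8):
    """1-up/1-down adaptive track: lower the SNR after a correct
    response, raise it after an error. This converges on the SNR for
    50% correct; the SRT estimate is the mean SNR at the reversals.
    `respond(snr)` returns True for a correct trial."""
    snr, last_correct, reversals = start_snr, None, []
    while len(reversals) < n_reversals:
        correct = respond(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)  # direction changed: a reversal
        last_correct = correct
        snr += -step if correct else step
    return sum(reversals) / len(reversals)
```

With a deterministic "listener" who is correct whenever the SNR exceeds some internal threshold, the track oscillates around that threshold.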

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1tdK
via IFTTT

The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users

Objective: This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users and the subsequent effects on horizontal-plane sound localization. Design: Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. Results: The ITE microphone placement provided significantly larger ILDs compared with the BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. Conclusions: The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and they also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared with the other placements in some patients. Larger improvements might be observed if patients had more experience with the new ILD cues provided by an ITE placement.
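
The ILD computed from the measured transfer functions reduces to a level ratio in dB between the two microphones' signals. A minimal sketch (the sign convention is an assumption):

```python
import math

def ild_db(mag_left, mag_right):
    """Interaural level difference in dB from linear signal magnitudes
    at the left and right microphones (positive = louder on the left)."""
    return 20.0 * math.log10(mag_left / mag_right)
```

Evaluating this across source azimuths for each microphone placement yields the ILD-versus-angle curves whose range the study compared across ITE, BTE, and SHD positions.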

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1nmz
via IFTTT

Test-Retest Reliability of the Binaural Interaction Component of the Auditory Brainstem Response

Objectives: The binaural interaction component (BIC) is the residual auditory brainstem response (ABR) obtained after subtracting the sum of the monaurally evoked from the binaurally evoked ABRs. The DN1 peak, the first negative peak of the BIC, has been postulated to have diagnostic value as a biomarker for binaural hearing abilities. Indeed, not only do DN1 amplitudes depend systematically upon binaural cues to location (interaural time and level differences), but they are also predictive of central hearing deficits in humans. A prominent issue in using BIC measures as a diagnostic biomarker is that DN1 amplitudes exhibit considerable variability not only across subjects but also within subjects across different measurement sessions. Design: In this study, the authors investigate the reliability of DN1 amplitude measurements by conducting repeated measurements on different days in eight adult guinea pigs. Results: Despite consistent ABR thresholds, ABR and DN1 amplitudes varied between and within subjects across recording sessions. However, the analysis reveals that DN1 amplitudes varied proportionally with the parent monaural ABR amplitudes, suggesting that common experimental factors likely account for the variability in both waveforms. Despite this variability, the authors show that the shape of the dependence between DN1 amplitude and interaural time difference is preserved. The authors then provide a BIC normalization strategy using monaural ABR amplitude that reduces the variability of DN1 peak measurements. Finally, the authors evaluate this normalization strategy in the context of detecting changes in the DN1 amplitude-to-interaural time difference relationship. Conclusions: The results indicate that BIC measurement variability can be reduced by a factor of two by performing a simple and objective normalization operation. The authors discuss the potential for this normalized BIC measure as a biomarker for binaural hearing.
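
One plausible reading of the normalization strategy, dividing the DN1 amplitude by the parent monaural ABR amplitudes so that shared experimental variability cancels, can be sketched as follows (the paper's exact formula may differ):

```python
def normalized_dn1(dn1_amp, abr_left_amp, abr_right_amp):
    """Express the DN1 amplitude relative to the mean of the parent
    monaural ABR amplitudes. Session-to-session factors (electrode
    contact, anesthesia depth) that scale all waveforms alike then
    divide out of the normalized value."""
    return dn1_amp / ((abr_left_amp + abr_right_amp) / 2.0)
```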

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0wCl
via IFTTT

Changes in the Compressive Nonlinearity of the Cochlea During Early Aging: Estimates From Distortion OAE Input/Output Functions

Objectives: The level-dependent growth of distortion product otoacoustic emissions (DPOAEs) provides an indirect metric of cochlear compressive nonlinearity. Recent evidence suggests that aging reduces nonlinear distortion emissions more than those associated with linear reflection. Therefore, in this study, we generate input/output (I/O) functions from the isolated distortion component of the DPOAE to probe the effects of early aging on the compressive nonlinearity of the cochlea. Design: Thirty adults whose ages ranged from 18 to 64 years participated in this study, forming a continuum of young to middle-aged subjects. When necessary for analyses, subjects were divided into a young-adult group with a mean age of 21 years and a middle-aged group with a mean age of 52 years. All young-adult subjects and 11 of the middle-aged subjects had normal hearing; 4 middle-aged ears had slight audiometric threshold elevation at mid-to-high frequencies. DPOAEs (2f1 − f2) were recorded using primary tones swept upward in frequency from 0.5 to 8 kHz and varied from 25 to 80 dB sound pressure level (SPL). The nonlinear distortion component of the total DPOAE was separated and used to create I/O functions at half-octave intervals from 1.3 to 7.4 kHz. Four features of OAE compression were extracted from a fit to these functions: compression threshold, range of compression, compression slope, and low-level growth. These values were compared between age groups, and correlational analyses were conducted between OAE compression threshold and age with audiometric threshold controlled. Results: Older ears had reduced DPOAE amplitude compared with young-adult ears. The OAE compression threshold was elevated at test frequencies above 2 kHz in the middle-aged subjects by 19 dB (35 versus 54 dB SPL), thereby reducing the compression range. In addition, middle-aged ears showed steeper amplitude growth beyond the compression threshold.
Audiometric threshold was initially found to be a confound in establishing the relationship between compression and age; however, statistical analyses allowed us to control for its variance. Correlations performed while controlling for age-related differences in high-frequency audiometric thresholds showed significant relationships between the DPOAE I/O compression threshold and age: older subjects tended to have elevated compression thresholds compared with younger subjects and an extended range of monotonic growth. Conclusions: Cochlear manifestations of nonlinearity, such as the DPOAE, weaken during early aging, and DPOAE I/O functions become linearized. Commensurate changes in high-frequency audiometric thresholds are not sufficient to fully explain these changes. The results suggest that age-related changes in compressive nonlinearity could produce a reduced dynamic range of hearing and contribute to perceptual difficulties in older listeners.
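
As a crude stand-in for the curve fit described above, the compression threshold can be approximated as the lowest stimulus level at which the local I/O slope (dB/dB) falls below a criterion; the levels, amplitudes, and 0.5 dB/dB criterion below are illustrative, not the study's fitting procedure:

```python
def compression_threshold(levels, amps, criterion=0.5):
    """Estimate the I/O compression threshold: the lowest stimulus
    level (dB SPL) at which growth of the DPOAE amplitude (dB) slows
    below `criterion` dB/dB. Returns None if growth never compresses."""
    for i in range(len(levels) - 1):
        slope = (amps[i + 1] - amps[i]) / (levels[i + 1] - levels[i])
        if slope < criterion:
            return levels[i]
    return None
```

On a linearized I/O function the slope stays near 1 dB/dB throughout, so the estimated threshold moves up or disappears, which is the pattern the study reports for older ears.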

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu15Mf
via IFTTT

Using the Digits-In-Noise Test to Estimate Age-Related Hearing Loss

Objective: Age-related hearing loss is common in the elderly population. Timely detection and targeted counseling can lead to adequate treatment with hearing aids. The Digits-In-Noise (DIN) test was developed as a relatively simple test to assess hearing acuity. It is a potentially powerful test for the screening of large populations, including the elderly. However, to date, no sensitivity or specificity rates for detecting hearing loss have been reported in a general elderly population. The purpose of this study was to evaluate the ability of the DIN test to screen for mild and moderate hearing loss in the elderly. Design: Data from pure-tone audiometry and the DIN test were collected from 3327 adults aged over 50 years (mean: 65) as part of the Rotterdam Study, a large population-based cohort study. Sensitivity and specificity of the DIN test for detecting hearing loss were calculated by comparing the speech reception threshold (SRT) with the pure-tone average threshold at 0.5, 1, 2, and 4 kHz (PTA0.5,1,2,4). Receiver operating characteristics were calculated for detecting >20 and >35 dB HL average hearing loss in the best ear. Results: Hearing loss varied greatly between subjects and, as expected, increased with age. Hearing loss was more severe at high frequencies and in men. A strong correlation (R = 0.80, p
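
The screening comparison above boils down to a four-frequency pure-tone average and a 2x2 classification of screen result against the reference standard. A minimal sketch (the >20 dB HL criterion is from the study; the SRT cutoff below is illustrative):

```python
def pta(thresholds_db):
    """Pure-tone average across the 0.5, 1, 2, and 4 kHz thresholds
    (PTA0.5,1,2,4), in dB HL."""
    return sum(thresholds_db) / len(thresholds_db)

def sensitivity_specificity(srts, ptas, srt_cut, pta_cut=20.0):
    """Screen positive = SRT above cutoff; true hearing loss = PTA
    above cutoff. Returns (sensitivity, specificity)."""
    tp = sum(1 for s, p in zip(srts, ptas) if s > srt_cut and p > pta_cut)
    fn = sum(1 for s, p in zip(srts, ptas) if s <= srt_cut and p > pta_cut)
    tn = sum(1 for s, p in zip(srts, ptas) if s <= srt_cut and p <= pta_cut)
    fp = sum(1 for s, p in zip(srts, ptas) if s > srt_cut and p <= pta_cut)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `srt_cut` over its range and plotting sensitivity against 1 − specificity gives the receiver operating characteristic the study reports.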

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1pdT
via IFTTT

Comparing the Accuracy and Speed of Manual and Tracking Methods of Measuring Hearing Thresholds

Objectives: The reliability of hearing thresholds obtained using the standard clinical method (modified Hughson-Westlake) has been the focus of previous investigation given the potential for tester bias (Margolis et al., 2015). In recent years, more precise methods in laboratory studies have been used that control for sources of bias, often at the expense of longer test times. The aim of this pilot study was to compare test-retest variability and the time required to obtain a full set of hearing thresholds (0.125–20 kHz) with the clinical modified Hughson-Westlake (manual) method against the automated, modified (single-frequency) Békésy tracking method (Lee et al., 2012). Design: Hearing thresholds from 10 subjects (8 female) between 19 and 47 years old (mean = 28.3; SD = 9.4) were measured using the two methods with identical test hardware and calibration. Thresholds were obtained using the modified Hughson-Westlake (manual) method and the Békésy (tracking) method. Measurements using each method were repeated after one week. Test-retest variability within each measurement method was computed across test sessions. Results from each test method as well as test time across methods were compared. Results: Test-retest variability was comparable and statistically indistinguishable between the two test methods. Thresholds were approximately 5 dB lower when measured using the tracking method; this difference was not statistically significant. The manual method of measuring thresholds was faster by approximately 4 minutes. Both methods required less time (~2 min) in the second session as compared to the first. Conclusion: Hearing thresholds obtained using the manual method can be just as reliable as those obtained using the tracking method over the large frequency range explored here (0.125–20 kHz).
These results perhaps point to the importance of equivalent and valid calibration techniques that can overcome frequency dependent discrepancies, most prominent at higher frequencies, in the sound pressure delivered to the ear.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0NVJ
via IFTTT

Neural Correlates of Phonetic Learning in Postlingually Deafened Cochlear Implant Listeners

Objective: The present training study aimed to examine the fine-scale behavioral and neural correlates of phonetic learning in adult postlingually deafened cochlear implant (CI) listeners. The study investigated whether high-variability identification training improved phonetic categorization of the /ba/–/da/ and /wa/–/ja/ speech contrasts and whether any training-related improvements in phonetic perception were correlated with neural markers associated with phonetic learning. It was hypothesized that training would sharpen phonetic boundaries for the speech contrasts and that changes in behavioral sensitivity would be associated with enhanced mismatch negativity (MMN) responses to stimuli that cross a phonetic boundary relative to MMN responses evoked using stimuli from the same phonetic category. Design: A computer-based training program was developed that featured multitalker variability and adaptive listening. The program was designed to help CI listeners attend to the important second formant transition cue that categorizes the /ba/–/da/ and /wa/–/ja/ contrasts. Nine adult CI listeners completed the training, and 4 additional CI listeners who did not undergo training were included to assess effects of procedural learning. Behavioral pre-post tests consisted of identification and discrimination of the synthetic /ba/–/da/ and /wa/–/ja/ speech continua. The electrophysiologic MMN response elicited by an across-phoneme-category pair and a within-phoneme-category pair that differed by an acoustically equivalent amount was also derived at pre-post test intervals for each speech contrast. Results: Training significantly enhanced behavioral sensitivity across the phonetic boundary and significantly altered labeling of the stimuli along the /ba/–/da/ continuum.
While training only slightly altered identification and discrimination of the /wa/–/ja/ continuum, trained CI listeners categorized the /wa/–/ja/ contrast more efficiently than the /ba/–/da/ contrast across pre-post test sessions. Consistent with behavioral results, pre-post EEG measures showed the MMN amplitude to the across phoneme category pair significantly increased with training for both the /ba/–/da/ and /wa/–/ja/ contrasts, but the MMN was unchanged with training for the corresponding within phoneme category pairs. Significant brain–behavior correlations were observed between changes in the MMN amplitude evoked by across category phoneme stimuli and changes in the slope of identification functions for the trained listeners for both speech contrasts. Conclusions: The brain and behavior data of the present study provide evidence that substantial neural plasticity for phonetic learning in adult postlingually deafened CI listeners can be induced by high variability identification training. These findings have potential clinical implications related to the aural rehabilitation process following receipt of a CI device.
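The MMN used here is a difference wave: the response to deviant stimuli minus the response to standard stimuli, with amplitude usually taken as the most negative value in a latency window. A schematic sketch, assuming waveforms are plain sample lists (window bounds are hypothetical sample indices, not the study's latencies):

```python
def mmn_amplitude(standard, deviant, window):
    """Peak (most negative) value of the MMN difference wave
    (deviant - standard) within a half-open sample-index window."""
    diff = [d - s for s, d in zip(standard, deviant)]
    lo, hi = window
    return min(diff[lo:hi])
```

A training-related increase in MMN amplitude, as reported for the across-category pairs, corresponds to this peak becoming more negative after training.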

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1hv5
via IFTTT

Better Visuospatial Working Memory in Adults Who Report Profound Deafness Compared to Those With Normal or Poor Hearing: Data From the UK Biobank Resource

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals compared to those with normal hearing. Other data, including from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at the group level it was superior to that of participants with either normal or poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0rhT
via IFTTT

Impact of Hearing Aid Technology on Outcomes in Daily Life II: Speech Understanding and Listening Effort

Objectives: Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to yield further improvements in daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. Design: In this participant-blinded, repeated crossover trial, 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants’ speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Results: Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. Conclusions: The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings.
The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily life for this population. For HA providers to make evidence-based recommendations to their clientele with hearing impairment it is essential that further independent research investigates the relative benefit/deficit of different levels of hearing technology across brands and manufacturers in these and other real-world listening domains.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1m1S
via IFTTT

Progressive Hearing Loss in Early Childhood

Objectives: Deterioration in hearing thresholds in children is of concern due to its effect on language development. Before universal newborn hearing screening (UNHS), accurate information on the progression of hearing loss was difficult to obtain due to limited information on hearing loss onset. The objective of this population-based study was to document the proportion of children who experienced progressive loss in a cohort followed through a UNHS program in one region of Canada. We explored risk factors for progression, including risk indicators and audiologic and clinical characteristics of children. We also investigated deterioration in hearing as a function of age. For this study, two working definitions of progressive hearing loss were adopted: (1) a change of ≥20 dB in the three-frequency (500, 1000, and 2000 Hz) pure-tone average, and (2) a decrease of ≥10 dB at two or more adjacent frequencies between 500 and 4000 Hz or a decrease of 15 dB at one octave frequency in the same frequency range. Design: Population-based data were collected prospectively on a cohort of children identified from 2003 to 2013 after the implementation of UNHS. Clinical characteristics including risk indicators (as per the Joint Committee on Infant Hearing), age at diagnosis, type and severity of hearing loss, and initial audiologic information were recorded when children were first identified with hearing loss. Serial audiometric results were extracted from the medical charts for this study. Differences between children with progressive and stable hearing loss were explored using χ2 tests. The association between risk indicators and progressive hearing loss was assessed through logistic regression. The cumulative amount of deterioration in hearing from 1 to 4 years of age was also examined.
Results: Our analysis of 330 children (251 exposed to screening) with detailed audiologic records showed that 158 (47.9%) children had some deterioration (at least 10 dB) in hearing thresholds in at least one ear. The 158 children included 76 (48.1%) with ≥20 dB loss in pure-tone average in at least one ear and 82 (51.9%) with less deterioration in hearing levels (≥10 but
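The two working definitions of progression given in the Design section can be expressed directly as a rule. The sketch below is an illustrative reading of those criteria, with the audiogram simplified to four octave frequencies (the study's actual audiograms would include more frequencies and the adjacency rule would apply across all of them):

```python
def is_progressive(baseline, follow_up):
    """Apply the study's two working definitions of progression.

    baseline/follow_up: dicts mapping frequency (Hz) -> threshold (dB HL);
    larger thresholds mean worse hearing.
    Definition 1: >=20 dB worsening of the 500/1000/2000 Hz pure-tone average.
    Definition 2: >=10 dB worsening at two or more adjacent frequencies
    (500-4000 Hz), or >=15 dB at a single octave frequency in that range.
    """
    pta = lambda t: sum(t[f] for f in (500, 1000, 2000)) / 3
    if pta(follow_up) - pta(baseline) >= 20:
        return True
    freqs = [500, 1000, 2000, 4000]
    drops = [follow_up[f] - baseline[f] for f in freqs]
    if any(a >= 10 and b >= 10 for a, b in zip(drops, drops[1:])):
        return True
    return any(d >= 15 for d in drops)
```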

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1oXn
via IFTTT

Tinnitus Self-Efficacy and Other Tinnitus Self-Report Variables in Patients With and Without Post-Traumatic Stress Disorder

Objective: Individuals with tinnitus and co-occurring psychological conditions typically rate their tinnitus as more disturbing than individuals without such comorbidities. Little is known about how tinnitus self-efficacy, or the confidence that individuals have in their abilities to successfully manage the effects of tinnitus, is influenced by mental or psychological health (PH) status. The purpose of this study was to examine the influence of psychological state on tinnitus perceptions and tinnitus self-efficacy in individuals with chronic tinnitus. Design: Observational study. Three groups (N = 199) were examined: (1) those with tinnitus without a concurrent psychological condition (tinnitus-only; n = 103), (2) those with tinnitus and a concurrent PH condition other than post-traumatic stress disorder (PTSD; tinnitus + PH; n = 34), and (3) those with tinnitus and PTSD (tinnitus + PTSD; n = 62). The Self-Efficacy for Tinnitus Management Questionnaire (SETMQ) was administered. Responses on the SETMQ were compared among the groups, as well as to other indicators of tinnitus perception such as (1) the percentage of time tinnitus was audible (tinnitus awareness), (2) the percentage of time tinnitus was distressing/bothersome, (3) tinnitus loudness, (4) tinnitus handicap inventory scores, (5) subjective ratings of degree of hearing loss, and (6) subjective ratings of sound tolerance problems. Results: The tinnitus + PTSD group reported significantly poorer tinnitus self-efficacy levels on average than the tinnitus-only group on all SETMQ subscales, and poorer self-efficacy levels than the tinnitus + PH group on most subscales (except routine management and devices). Tinnitus self-efficacy levels were similar between the tinnitus + PH and tinnitus-only groups except for the emotional response subscale, on which the tinnitus-only patients reported higher self-efficacy on average than both other groups.
Group differences were not seen for tinnitus loudness ratings nor for the amount of time individuals were aware of their tinnitus. Group differences were observed for the percentage of time tinnitus was distressing/bothersome, self-reported degree of hearing loss, sound tolerance problems ratings, and responses on the tinnitus handicap inventory (THI). In general, the group differences revealed patient ratings for the tinnitus-only group were least severe, followed by the tinnitus + PH group, and the tinnitus + PTSD group rated tinnitus effects as most severe. With all patient responses, the tinnitus + PTSD group was found to be significantly more affected by tinnitus than the tinnitus-only group; in some cases, the responses were similar between the tinnitus + PTSD and tinnitus + PH group and in other cases, responses were similar between the tinnitus + PH group and the tinnitus-only group. Conclusions: Tinnitus self-efficacy, along with other self-assessed tinnitus characteristics, varied across groups distinguished by PH diagnoses. In general, individuals with tinnitus and concurrent PTSD reported significantly poorer tinnitus self-efficacy and more handicapping tinnitus effects when compared to individuals with other psychological conditions or those with tinnitus alone. The group differences highlighted the need to consider tinnitus self-efficacy in intervention strategies, particularly for patients with tinnitus and concurrent PTSD as the results reiterated the unique ability of PTSD to interact in powerful and disturbing ways with the tinnitus experience and with patients’ coping ability.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0Dhb
via IFTTT

Hearing Instruments for Unilateral Severe-to-Profound Sensorineural Hearing Loss in Adults: A Systematic Review and Meta-Analysis

Objectives: A systematic review of the literature and meta-analysis was conducted to assess the nature and quality of the evidence for the use of hearing instruments in adults with a unilateral severe to profound sensorineural hearing loss. Design: The PubMed, EMBASE, MEDLINE, Cochrane, CINAHL, and DARE databases were searched with no restrictions on language. The search included articles from the start of each database until February 11, 2015. Studies were included that (a) assessed the impact of any form of hearing instrument, including devices that reroute signals between the ears or restore aspects of hearing to a deaf ear, in adults with a sensorineural severe to profound loss in one ear and normal or near-normal hearing in the other ear; (b) compared different devices or compared a device with placebo or the unaided condition; (c) measured outcomes in terms of speech perception, spatial listening, or quality of life; (d) were prospective controlled or observational studies. Studies that met prospectively defined criteria were subjected to random effects meta-analyses. Results: Twenty-seven studies reported in 30 articles were included. The evidence was graded as low-to-moderate quality, having been obtained primarily from observational before-after comparisons. The meta-analysis identified statistically significant benefits to speech perception in noise for devices that rerouted the speech signals of interest from the worse ear to the better ear using either air or bone conduction (mean benefit, 2.5 dB). However, these devices also degraded speech understanding significantly and to a similar extent (mean deficit, 3.1 dB) when noise was rerouted to the better ear. Data on the effects of cochlear implantation on speech perception could not be pooled as the prospectively defined criteria for meta-analysis were not met. Inconsistency in the assessment of outcomes relating to sound localization also precluded the synthesis of evidence across studies.
Evidence for the relative efficacy of different devices was sparse but a statistically significant advantage was observed for rerouting speech signals using abutment-mounted bone conduction devices when compared with outcomes after preoperative trials of air conduction devices when speech and noise were colocated (mean benefit, 1.5 dB). Patients reported significant improvements in hearing-related quality of life with both rerouting devices and following cochlear implantation. Only two studies measured health-related quality of life and findings were inconclusive. Conclusions: Devices that reroute sounds from an ear with a severe to profound hearing loss to an ear with minimal hearing loss may improve speech perception in noise when signals of interest are located toward the impaired ear. However, the same device may also degrade speech perception as all signals are rerouted indiscriminately, including noise. Although the restoration of functional hearing in both ears through cochlear implantation could be expected to provide benefits to speech perception, the inability to synthesize evidence across existing studies means that such a conclusion cannot yet be made. For the same reason, it remains unclear whether cochlear implantation can improve the ability to localize sounds despite restoring bilateral input. Prospective controlled studies that measure outcomes consistently and control for selection and observation biases are required to improve the quality of the evidence for the provision of hearing instruments to patients with unilateral deafness and to support any future recommendations for the clinical management of these patients.
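The pooled decibel benefits quoted above come from inverse-variance-weighted meta-analysis. As a simplified illustration (a fixed-effect pooling; the review itself used random-effects models, which additionally estimate between-study variance):

```python
from math import sqrt

def pooled_effect(effects, ses):
    """Inverse-variance weighted pooled effect and its standard error.

    effects: per-study mean benefits (e.g., dB SNR); ses: their standard errors.
    """
    weights = [1 / se ** 2 for se in ses]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    return mean, sqrt(1 / total)
```

Studies with smaller standard errors contribute more to the pooled estimate, which is why a few large, precise studies can dominate a meta-analytic result.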

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0UAv
via IFTTT

The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment

Objective: The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. Design: In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Results: Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH groups. This outcome pattern persisted when comparisons were restricted to subgroups of AHI and CHI adults matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in sensitivity to morphosyntactic distortions when processing short masked sentences presented visually. Conclusions: These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities.
However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu33Mu
via IFTTT

Association Between Osteoporosis/Osteopenia and Vestibular Dysfunction in South Korean Adults

Objective: The associations of osteoporosis/osteopenia with vestibular dysfunction have not been well evaluated, and conflicting results have been reported. The purpose of this study was to examine the relationship between low bone mineral density (BMD) and vestibular dysfunction. Design: The authors conducted a cross-sectional study in 3579 Korean adults aged 50 years and older who participated in the 2009 to 2010 Korea National Health and Nutrition Examination Survey. BMD was measured by dual-energy X-ray absorptiometry. Vestibular dysfunction was evaluated using the modified Romberg test of standing balance on firm and compliant support surfaces. Data were analyzed in 2015. Multiple logistic regression analysis was used to compute odds ratios (ORs) and 95% confidence intervals (CIs). Results: The prevalence of vestibular dysfunction was 4.3 ± 0.5%. After adjustment for potential confounders, the adjusted ORs for vestibular dysfunction based on BMD were 1.00 (reference) for normal BMD, 2.21 (95% CI: 1.08, 4.50) for osteopenia, and 2.47 (95% CI: 1.05, 5.81) for osteoporosis (p
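The adjusted ORs and 95% CIs reported here follow from the logistic-regression coefficients. A minimal sketch of the standard Wald-interval computation (illustrative only; the coefficient and standard error in the test are placeholders, not values from this study):

```python
from math import exp

def odds_ratio_ci(beta, se):
    """OR and Wald 95% CI from a logistic-regression coefficient and its SE."""
    return exp(beta), exp(beta - 1.96 * se), exp(beta + 1.96 * se)
```

A CI whose lower bound exceeds 1.00, as for both osteopenia and osteoporosis above, indicates a statistically significant elevation in odds relative to the reference group.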

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0D0F
via IFTTT

Reflectance Measures from Infant Ears With Normal Hearing and Transient Conductive Hearing Loss

Objective: The objective is to develop methods to utilize newborn reflectance measures for the identification of middle-ear transient conditions (e.g., middle-ear fluid) during the newborn period and ultimately during the first few months of life. Transient middle-ear conditions are a suspected source of failure to pass a newborn hearing screening. The ability to identify a conductive loss during the screening procedure could enable the referred ear to be either (1) cleared of a middle-ear condition and recommended for more extensive hearing assessment as soon as possible, or (2) suspected of a transient middle-ear condition, and if desired, be rescreened before more extensive hearing assessment. Design: Reflectance measurements are reported from full-term, healthy, newborn babies in which one ear referred and one ear passed an initial auditory brainstem response newborn hearing screening and a subsequent distortion product otoacoustic emission screening on the same day. These same subjects returned for a detailed follow-up evaluation at age 1 month (range 14 to 35 days). In total, measurements were made on 30 subjects who had a unilateral refer near birth (during their first 2 days of life) and bilateral normal hearing at follow-up (about 1 month old). Three specific comparisons were made: (1) association of an ear’s state with power reflectance near birth (referred versus passed ear), (2) changes in power reflectance of normal ears between newborn and 1 month old (maturation effects), and (3) association of an ear’s newborn state (referred versus passed) with the ear’s power reflectance at 1 month. In addition to these measurements, a set of preliminary data selection criteria were developed to ensure that analyzed data were not corrupted by acoustic leaks and other measurement problems.
Results: Within 2 days of birth, the power reflectance measured in newborn ears with transient middle-ear conditions (referred newborn hearing screening and passed hearing assessment at age 1 month) was significantly greater than power reflectance on newborn ears that passed the newborn hearing screening across all frequencies (500 to 6000 Hz). Changes in power reflectance in normal ears from newborn to 1 month appear in approximately the 2000 to 5000 Hz range but are not present at other frequencies. The power reflectance at age 1 month does not depend significantly on the ear’s state near birth (refer or pass hearing screening) for frequencies above 700 Hz; there might be small differences at lower frequencies. Conclusions: Power reflectance measurements are significantly different for ears that pass newborn hearing screening and ears that refer with middle-ear transient conditions. At age 1 month, about 90% of ears that referred at birth passed an auditory brainstem response hearing evaluation; within these ears the power reflectance at 1 month did not differ between the ear that initially referred at birth and the ear that passed the hearing screening at birth for frequencies above 700 Hz. This study also proposes a preliminary set of criteria for determining when reflectance measures on young babies are corrupted by acoustic leaks, probes against the ear canal, or other measurement problems. Specifically proposed are “data selection criteria” that depend on the power reflectance, impedance magnitude, and impedance angle. Additional data collected in the future are needed to improve and test these proposed criteria.
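Power reflectance itself is derived from the measured ear-canal impedance. A minimal sketch of the usual definition (the pressure reflectance magnitude squared), assuming the ear-canal impedance and the measurement tube's characteristic impedance are available as possibly complex numbers:

```python
def power_reflectance(z_ear, z0):
    """Power reflectance |(Z - Z0)/(Z + Z0)|^2 from the (complex) ear-canal
    impedance Z and the characteristic impedance Z0 of the canal/probe tube.

    0 means all acoustic power is absorbed; 1 means all power is reflected,
    as with a stiff, fluid-filled middle ear.
    """
    gamma = (z_ear - z0) / (z_ear + z0)
    return abs(gamma) ** 2
```

The elevated power reflectance reported for referred newborn ears corresponds to values closer to 1: a fluid-filled middle ear reflects more of the probe sound back into the canal.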

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1UVn
via IFTTT

The Physiological Basis and Clinical Use of the Binaural Interaction Component of the Auditory Brainstem Response

The auditory brainstem response (ABR) is a sound-evoked, noninvasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, the authors discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. The authors review how interaural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized.
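The BIC definition given above is a simple waveform subtraction. A schematic sketch (sign conventions vary across studies; here the monaural sum is subtracted from the binaural response, so DN1 appears as a negative deflection):

```python
def binaural_interaction_component(left, right, binaural):
    """BIC waveform: binaural ABR minus the sum of the two monaural ABRs.

    All three inputs are equal-length sample sequences (e.g., microvolts).
    A nonzero BIC indicates the binaural response is not simply the sum of
    the monaural responses, i.e., binaural interaction occurred.
    """
    return [b - (l + r) for l, r, b in zip(left, right, binaural)]
```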

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu31Em
via IFTTT

A Novel Algorithm to Derive Spread of Excitation Based on Deconvolution

Objective: The width of the spread of excitation (SOE) curve has been widely thought to represent an estimate of SOE. Therefore, correlates between psychophysical parameters, such as pitch discrimination and speech perception, and the width of SOE curves have long been investigated. However, to date, no relationships between these objective and subjective measurements have been determined. In a departure from the current thinking, the authors now propose that the SOE curve, recorded with forward masking, is the equivalent of a convolution operation. As such, deconvolution would be expected to retrieve the excitation areas attributable to either masker or probe, potentially more closely revealing the actual neural SOE. This study aimed to develop a new analytical tool with which to derive SOE using this principle. Design: Intraoperative SOE curve measurements of 16 subjects, implanted with an Advanced Bionics implant, were analyzed. Evoked compound action potential (eCAP)-based SOE curves were recorded on electrodes 3 to 16, using the forward masking paradigm with a variable masker. The measured SOE curves were then compared with predicted SOE curves, built by the convolution of basic excitation density profiles (EDPs). Predicted SOE curves were fitted to the measured SOEs by iterative adjustment of the EDPs for the masker and the probe. Results: It was possible to generate a good fit between the predicted and measured SOE curves, inclusive of their asymmetry. The rectangular EDP was of least value in terms of its ability to generate a good fit; smoother SOE curves were modeled using the exponential or Gaussian EDPs. In most subjects, the EDP width (i.e., the size of the excitation area) gradually changed from wide at the apex of the electrode array to narrow at the base.
A comparison of EDP widths to SOE curve widths, as calculated in the literature, revealed that the EDPs provide a measure of the SOE that is qualitatively distinct from that provided by conventional methods. Conclusions: This study shows that an eCAP-based SOE curve, measured with forward masking, can be treated as a convolution of EDPs for masker and probe. The poor fit achieved between the measured and modeled data using the rectangular EDP emphasizes the requirement for a sloping excitation area to mimic actual SOE recordings. Our deconvolution method provides an explanation for the frequently observed asymmetry of SOE curves measured along the electrode array, as this is a consequence of a wider excitation area in the apical part of the cochlea, in the absence of any asymmetry in the actual EDP. In addition, broader apical EDPs underlie the higher eCAP amplitudes found for apical stimulation.
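The core idea of the study, that a forward-masked SOE curve behaves like a convolution of masker and probe EDPs, can be illustrated with a plain discrete convolution: wider EDPs produce wider, smoother predicted SOE curves. A toy sketch (not the authors' fitting procedure, which iteratively adjusted parameterized EDP shapes):

```python
def convolve(a, b):
    """Full discrete convolution of two sequences, e.g., two EDPs sampled
    along the electrode array; the result models a measured SOE curve."""
    n = len(a) + len(b) - 1
    out = [0.0] * n
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out
```

For example, convolving two narrow rectangular profiles already yields a triangular, broader curve, which is why the measured SOE width overstates the width of either underlying excitation area.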

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1heA
via IFTTT

Intelligibility of the Patient’s Speech Predicts the Likelihood of Cochlear Implant Success in Prelingually Deaf Adults

Objectives: The objective of this study was to determine the validity and clinical applicability of the intelligibility of the patient’s own speech, measured via a Vowel Identification Test (VOW), as a predictor of speech perception for prelingually deafened adults after 1 year of cochlear implant use. Specifically, the objective was to investigate the probability that a prelingually deaf patient, given a VOW score above (or below) a chosen cutoff point, reaches a postimplant speech perception score above (or below) a critical value. High predictive values for VOW could support preimplant counseling and implant candidacy decisions in individual patients. Design: One hundred fifty-two adult cochlear implant candidates with prelingual hearing impairment or deafness took part as speakers in a VOW; 149 speakers completed the test successfully. Recordings of the speech stimuli, consisting of nonsense words of the form [h]-V-[t], where V represents one of 15 vowels/diphthongs, were presented to two normal-hearing listeners. The VOW score was expressed as the percentage of vowels identified correctly (averaged over the 2 listeners). Subsequently, the 149 participants enrolled in the cochlear implant selection procedure. Extremely poor speakers were excluded from implantation, as were patients who did not meet the regular selection criteria developed for postlingually deafened patients. Of the 149 participants, 92 were selected for implantation. For the implanted group, speech perception data were collected at 1 year postimplantation. Results: Speech perception score at 1 year postimplantation (available for 77 of the 92 implanted participants) correlated positively with preimplant intelligibility of the patient’s speech, as represented by VOW (r = 0.79, p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0uKn
via IFTTT

Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech

Objectives: Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. Design: Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners’ ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. Results: Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear.
For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7% points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. Conclusions: Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners’ ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
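The interruption paradigm described above (square-wave gating at 5 Hz with a 50% duty cycle) is straightforward to reproduce. The sketch below uses white noise as a stand-in for a speech waveform and an assumed sampling rate; it only illustrates the gating itself, not the vocoding or bimodal presentation.

```python
import numpy as np

fs = 16000                        # sampling rate in Hz (assumed)
dur = 1.0                         # stimulus duration in seconds
t = np.arange(int(fs * dur)) / fs

# Stand-in for a speech waveform (white noise for illustration).
rng = np.random.default_rng(0)
speech = rng.standard_normal(t.size)

# 5 Hz square-wave gate, 50% duty cycle: the signal alternates between
# 100 ms "on" and 100 ms "off" segments.
gate_rate = 5.0
gate = (np.floor(t * gate_rate * 2) % 2 == 0).astype(float)
interrupted = speech * gate

# With a 50% duty cycle, half of the samples are silenced.
silenced_fraction = float(np.mean(interrupted == 0.0))
```

Listeners must then restore the silenced segments using top-down linguistic knowledge, which is why sentence context matters more for interrupted than for continuous speech.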

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0oCJ
via IFTTT

Human Envelope Following Responses to Amplitude Modulation: Effects of Aging and Modulation Depth

Objective: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time and to compare these objective electrophysiological measures to subjective behavioral thresholds in young normal-hearing and older subjects. Design: Participants: Three groups of subjects included a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group (“O1”; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group (“O2”; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s). EFRs were analyzed as a function of the AM depth. In condition 2, auditory steady-state responses were recorded to fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A 3 AFC (alternative forced choice) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold. Results: Across all ages, the fixed AM depth auditory steady-state response and swept AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly higher (not significant) behavioral AM detection thresholds than younger subjects. AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range.
In the young normal-hearing group, the EFR phase did not differ with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (or phase slope) was significantly correlated to the pure-tone threshold at 4 kHz. Conclusions: EFRs can be recorded using either the swept modulation depth or the discrete AM depth techniques. Sweep recordings may provide additional valuable information at suprathreshold intensities including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects suggesting that aging affects the ability of the auditory system to encode subtle differences in the depth of AM. The phase-slope differences are likely related to differences in low and high-frequency contributions to EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present individual subjects who did not suffer from apparent temporal processing deficits.
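The swept-depth stimulus from condition 1 (AM depth of a 41 Hz modulated white-noise carrier rising from 2% to 100% at 5%/s) can be generated as follows. The sampling rate and random seed are assumptions for illustration; the depth trajectory and modulation rate follow the abstract.

```python
import numpy as np

fs = 8000                       # sampling rate in Hz (assumed)
am_rate = 41.0                  # amplitude-modulation rate (Hz), per the study
sweep_rate = 0.05               # depth increases by 5% per second
# Sweeping from 2% to 100% depth takes (1.00 - 0.02) / 0.05 = 19.6 s.
t = np.arange(int(fs * 19.6)) / fs

# Modulation depth rises linearly from 2% toward 100%, capped at 100%.
depth = np.clip(0.02 + sweep_rate * t, 0.0, 1.0)

rng = np.random.default_rng(1)
carrier = rng.standard_normal(t.size)                  # white-noise carrier
modulator = 1.0 + depth * np.sin(2 * np.pi * am_rate * t)
stimulus = carrier * modulator
```

Analyzing the EFR amplitude at 41 Hz as a function of the instantaneous depth then yields the sigmoidal amplitude-versus-depth function the study describes, from which plateau level, slope, and neural dynamic range can be read off.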

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0CK9
via IFTTT

Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing

Objectives: This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Design: Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance.
Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Results: Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. Conclusions: The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.
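The adaptive procedure mentioned in the Design section tracks the signal-to-noise ratio (SNR) at which a listener scores 50% correct. A minimal 1-down/1-up staircase, which converges on the 50% point, is sketched below; the step size, trial count, and simulated deterministic listener are illustrative assumptions, not the study's exact protocol.

```python
def adaptive_snr(respond, start_snr=0.0, step=2.0, n_trials=40):
    """1-down/1-up staircase: a correct response lowers the SNR,
    an incorrect response raises it; converges on ~50% correct."""
    snr = start_snr
    reversals = []
    last_dir = 0
    for _ in range(n_trials):
        direction = -1 if respond(snr) else +1
        if last_dir and direction != last_dir:
            reversals.append(snr)       # track direction changes
        last_dir = direction
        snr += direction * step
    # Estimate the 50%-correct SNR as the mean of the reversal points.
    return sum(reversals) / len(reversals)

# Simulated listener with a hard threshold at -6 dB SNR (hypothetical):
# the staircase estimate lands midway between the bracketing steps.
est = adaptive_snr(lambda snr: snr >= -6.0)
```

Real speech-in-noise tests use probabilistic listeners and psychometric functions, so the reversal average is taken over many trials; the mechanism is the same.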

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1tdK
via IFTTT

The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users

Objective: This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Design: Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. Results: The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. Conclusions: The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1nmz
via IFTTT

Test-Retest Reliability of the Binaural Interaction Component of the Auditory Brainstem Response

Objectives: The binaural interaction component (BIC) is the residual auditory brainstem response (ABR) obtained after subtracting the sum of monaurally evoked from binaurally evoked ABRs. The DN1 peak—the first negative peak of the BIC—has been postulated to have diagnostic value as a biomarker for binaural hearing abilities. Indeed, not only do DN1 amplitudes depend systematically upon binaural cues to location (interaural time and level differences), but they are also predictive of central hearing deficits in humans. A prominent issue in using BIC measures as a diagnostic biomarker is that DN1 amplitudes not only exhibit considerable variability across subjects, but also within subjects across different measurement sessions. Design: In this study, the authors investigated the DN1 amplitude measurement reliability by conducting repeated measurements on different days in eight adult guinea pigs. Results: Despite consistent ABR thresholds, ABR and DN1 amplitudes varied between and within subjects across recording sessions. However, the analysis revealed that DN1 amplitudes varied proportionally with parent monaural ABR amplitudes, suggesting that common experimental factors likely account for the variability in both waveforms. Despite this variability, the authors show that the shape of the dependence between DN1 amplitude and interaural time difference is preserved. The authors then provide a BIC normalization strategy using monaural ABR amplitude that reduces the variability of DN1 peak measurements. Finally, the authors evaluate this normalization strategy in the context of detecting changes of the DN1 amplitude-to-interaural time difference relationship. Conclusions: The study results indicate that the BIC measurement variability can be reduced by a factor of two by performing a simple and objective normalization operation. The authors discuss the potential for this normalized BIC measure as a biomarker for binaural hearing.
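The BIC computation itself is a simple waveform subtraction, and the normalization strategy divides the DN1 amplitude by the parent monaural ABR amplitude. The sketch below uses toy single-peak waveforms as stand-ins for recorded ABRs; the exact normalization formula is our assumption based on the abstract's description.

```python
import numpy as np

t = np.linspace(0, 10e-3, 500)             # 10 ms recording epoch

def abr(amp):
    """Toy ABR-like waveform: one Gaussian peak at ~4 ms (illustrative)."""
    return amp * np.exp(-((t - 4e-3) / 0.5e-3) ** 2)

left, right = abr(1.0), abr(1.1)           # monaurally evoked responses
binaural = abr(1.8)                        # binaural response < left + right

# BIC = binaural ABR minus the sum of the monaural ABRs.
bic = binaural - (left + right)
dn1_amplitude = bic.min()                  # DN1: first negative peak

# Normalize DN1 by the mean monaural ABR peak to factor out common
# session-to-session amplitude variability.
monaural_peak = 0.5 * (left.max() + right.max())
dn1_normalized = dn1_amplitude / monaural_peak
```

Because session factors (electrode impedance, anesthesia depth) scale the BIC and the monaural ABRs together, the ratio is more stable across sessions than the raw DN1 amplitude, which is the reported factor-of-two reduction in variability.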

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0wCl
via IFTTT

Changes in the Compressive Nonlinearity of the Cochlea During Early Aging: Estimates From Distortion OAE Input/Output Functions

Objectives: The level-dependent growth of distortion product otoacoustic emissions (DPOAEs) provides an indirect metric of cochlear compressive nonlinearity. Recent evidence suggests that aging reduces nonlinear distortion emissions more than those associated with linear reflection. Therefore, in this study, we generate input/output (I/O) functions from the isolated distortion component of the DPOAE to probe the effects of early aging on the compressive nonlinearity of the cochlea. Design: Thirty adults whose ages ranged from 18 to 64 years participated in this study, forming a continuum of young to middle-aged subjects. When necessary for analyses, subjects were divided into a young-adult group with a mean age of 21 years, and a middle-aged group with a mean age of 52 years. All young-adult subjects and 11 of the middle-aged subjects had normal hearing; 4 middle-aged ears had slight audiometric threshold elevation at mid-to-high frequencies. DPOAEs (2f1 − f2) were recorded using primary tones swept upward in frequency from 0.5 to 8 kHz, and varied from 25 to 80 dB sound pressure level. The nonlinear distortion component of the total DPOAE was separated and used to create I/O functions at one-half octave intervals from 1.3 to 7.4 kHz. Four features of OAE compression were extracted from a fit to these functions: compression threshold, range of compression, compression slope, and low-level growth. These values were compared between age groups and correlational analyses were conducted between OAE compression threshold and age with audiometric threshold controlled. Results: Older ears had reduced DPOAE amplitude compared with young-adult ears. The OAE compression threshold was elevated at test frequencies above 2 kHz in the middle-aged subjects by 19 dB (35 versus 54 dB SPL), thereby reducing the compression range. In addition, middle-aged ears showed steeper amplitude growth beyond the compression threshold.
Audiometric threshold was initially found to be a confound in establishing the relationship between compression and age; however, statistical analyses allowed us to control its variance. Correlations performed while controlling for age differences in high-frequency audiometric thresholds showed significant relationships between the DPOAE I/O compression threshold and age: Older subjects tended to have elevated compression thresholds compared with younger subjects and an extended range of monotonic growth. Conclusions: Cochlear manifestations of nonlinearity, such as the DPOAE, weaken during early aging, and DPOAE I/O functions become linearized. Commensurate changes in high-frequency audiometric thresholds are not sufficient to fully explain these changes. The results suggest that age-related changes in compressive nonlinearity could produce a reduced dynamic range of hearing, and contribute to perceptual difficulties in older listeners.
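Extracting a compression threshold from a DPOAE I/O function amounts to finding the "knee" where steep low-level growth gives way to shallow compressive growth. The broken-stick model and the slope-based knee detector below are simplified stand-ins for the study's actual fitting procedure, with made-up slope and knee values.

```python
import numpy as np

levels = np.arange(25, 85, 5)          # primary level, 25-80 dB SPL

def io_function(L, knee=45.0, slope_lo=1.0, slope_hi=0.3):
    """Piecewise-linear (broken-stick) DPOAE growth in dB: steep
    below the compression knee, shallow above it (illustrative)."""
    return np.where(L < knee,
                    slope_lo * (L - 25),
                    slope_lo * (knee - 25) + slope_hi * (L - knee))

amp = io_function(levels)

# Estimate the compression threshold as the level where the local
# slope first falls below half the low-level growth rate.
slopes = np.diff(amp) / np.diff(levels)
knee_est = levels[1:][slopes < 0.5 * slopes[0]][0]
```

Linearization with age, as reported in the abstract, would appear in this picture as a higher knee (elevated compression threshold) and a larger high-level slope, shrinking the compressive range.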

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu15Mf
via IFTTT

Using the Digits-In-Noise Test to Estimate Age-Related Hearing Loss

Objective: Age-related hearing loss is common in the elderly population. Timely detection and targeted counseling can lead to adequate treatment with hearing aids. The Digits-In-Noise (DIN) test was developed as a relatively simple test to assess hearing acuity. It is a potentially powerful test for the screening of large populations, including the elderly. However, to date, no sensitivity or specificity rates for detecting hearing loss have been reported in a general elderly population. The purpose of this study was to evaluate the ability of the DIN test to screen for mild and moderate hearing loss in the elderly. Design: Data from pure-tone audiometry and the DIN test were collected from 3327 adults aged over 50 (mean: 65), as part of the Rotterdam Study, a large population-based cohort study. Sensitivity and specificity of the DIN test for detecting hearing loss were calculated by comparing the speech reception threshold (SRT) with the pure-tone average threshold at 0.5, 1, 2, and 4 kHz (PTA0.5,1,2,4). Receiver operating characteristics were calculated for detecting >20 and >35 dB HL average hearing loss at the best ear. Results: Hearing loss varied greatly between subjects and, as expected, increased with age. Losses were more severe at high frequencies and in men. A strong correlation (R = 0.80, p
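The screening comparison described here reduces to two small computations: the four-frequency pure-tone average (PTA0.5,1,2,4) against a >20 dB HL cutoff, and sensitivity/specificity of the DIN screen against that pure-tone reference. The thresholds and the toy cohort below are invented example data, not Rotterdam Study values.

```python
import numpy as np

# PTA0.5,1,2,4: mean threshold at 0.5, 1, 2, and 4 kHz (example values).
thresholds = {0.5: 15, 1: 20, 2: 30, 4: 45}      # dB HL per frequency
pta = float(np.mean(list(thresholds.values())))  # four-frequency average
has_loss = pta > 20                              # ">20 dB HL" criterion

def sensitivity_specificity(screen_positive, loss_truth):
    """Sensitivity/specificity of a screen (e.g., DIN SRT above a cutoff)
    against the pure-tone reference standard."""
    tp = sum(s and l for s, l in zip(screen_positive, loss_truth))
    tn = sum((not s) and (not l) for s, l in zip(screen_positive, loss_truth))
    sens = tp / sum(loss_truth)
    spec = tn / (len(loss_truth) - sum(loss_truth))
    return sens, spec

# Toy cohort: the screen flags 3 of 4 true losses, passes 4 of 5 normals.
sens, spec = sensitivity_specificity(
    [1, 1, 1, 0, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 0, 0])
```

Sweeping the SRT cutoff and recomputing these two rates at each value traces out the receiver operating characteristic curve reported in the study.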

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1pdT
via IFTTT

Comparing the Accuracy and Speed of Manual and Tracking Methods of Measuring Hearing Thresholds

Objectives: The reliability of hearing thresholds obtained using the standard clinical method (modified Hughson-Westlake) has been the focus of previous investigation given the potential for tester bias (Margolis et al., 2015). In recent years, more precise methods in laboratory studies have been used that control for sources of bias, often at the expense of longer test times. The aim of this pilot study was to compare the test-retest variability and the time required to obtain a full set of hearing thresholds (0.125 – 20 kHz) of the clinical modified Hughson-Westlake (manual) method with that of the automated, modified (single frequency) Békésy tracking method (Lee et al., 2012). Design: Hearing thresholds from 10 subjects (8 female) between 19 to 47 years old (mean = 28.3; SD = 9.4) were measured using two methods with identical test hardware and calibration. Thresholds were obtained using the modified Hughson-Westlake (manual) method and the Békésy method (tracking). Measurements using each method were repeated after one week. Test-retest variability within each measurement method was computed across test sessions. Results from each test method as well as test time across methods were compared. Results: Test-retest variability was comparable and statistically indistinguishable between the two test methods. Thresholds were approximately 5 dB lower when measured using the tracking method. This difference was not statistically significant. The manual method of measuring thresholds was faster by approximately 4 minutes. Both methods required less time (~ 2 mins) in the second session as compared to the first. Conclusion: Hearing thresholds obtained using the manual method can be just as reliable as those obtained using the tracking method over the large frequency range explored here (0.125 – 20 kHz).
These results perhaps point to the importance of equivalent and valid calibration techniques that can overcome frequency dependent discrepancies, most prominent at higher frequencies, in the sound pressure delivered to the ear.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0NVJ
via IFTTT

Neural Correlates of Phonetic Learning in Postlingually Deafened Cochlear Implant Listeners

Objective: The present training study aimed to examine the fine-scale behavioral and neural correlates of phonetic learning in adult postlingually deafened cochlear implant (CI) listeners. The study investigated whether high variability identification training improved phonetic categorization of the /ba/–/da/ and /wa/–/ja/ speech contrasts and whether any training-related improvements in phonetic perception were correlated with neural markers associated with phonetic learning. It was hypothesized that training would sharpen phonetic boundaries for the speech contrasts and that changes in behavioral sensitivity would be associated with enhanced mismatch negativity (MMN) responses to stimuli that cross a phonetic boundary relative to MMN responses evoked using stimuli from the same phonetic category. Design: A computer-based training program was developed that featured multitalker variability and adaptive listening. The program was designed to help CI listeners attend to the important second formant transition cue that categorizes the /ba/–/da/ and /wa/–/ja/ contrasts. Nine adult CI listeners completed the training, and 4 additional CI listeners who did not undergo training were included to assess effects of procedural learning. Behavioral pre-post tests consisted of identification and discrimination of the synthetic /ba/–/da/ and /wa/–/ja/ speech continua. The electrophysiologic MMN response elicited by an across phoneme category pair and a within phoneme category pair that differed by an acoustically equivalent amount was derived at pre-post test intervals for each speech contrast as well. Results: Training significantly enhanced behavioral sensitivity across the phonetic boundary and significantly altered labeling of the stimuli along the /ba/–/da/ continuum.
While training only slightly altered identification and discrimination of the /wa/–/ja/ continuum, trained CI listeners categorized the /wa/–/ja/ contrast more efficiently than the /ba/–/da/ contrast across pre-post test sessions. Consistent with behavioral results, pre-post EEG measures showed the MMN amplitude to the across phoneme category pair significantly increased with training for both the /ba/–/da/ and /wa/–/ja/ contrasts, but the MMN was unchanged with training for the corresponding within phoneme category pairs. Significant brain–behavior correlations were observed between changes in the MMN amplitude evoked by across category phoneme stimuli and changes in the slope of identification functions for the trained listeners for both speech contrasts. Conclusions: The brain and behavior data of the present study provide evidence that substantial neural plasticity for phonetic learning in adult postlingually deafened CI listeners can be induced by high variability identification training. These findings have potential clinical implications related to the aural rehabilitation process following receipt of a CI device.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1hv5
via IFTTT

Better Visuospatial Working Memory in Adults Who Report Profound Deafness Compared to Those With Normal or Poor Hearing: Data From the UK Biobank Resource

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals compared to those with normal hearing. Other data, including from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at the group level it was superior to that of participants with both normal and poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0rhT
via IFTTT

Impact of Hearing Aid Technology on Outcomes in Daily Life II: Speech Understanding and Listening Effort

Objectives: Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to result in further improvements to daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. Design: For this participant-blinded, repeated, crossover trial, 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants’ speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Results: Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. Conclusions: The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings.
The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily life for this population. For HA providers to make evidence-based recommendations to their clientele with hearing impairment it is essential that further independent research investigates the relative benefit/deficit of different levels of hearing technology across brands and manufacturers in these and other real-world listening domains.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1m1S
via IFTTT

Progressive Hearing Loss in Early Childhood

Objectives: Deterioration in hearing thresholds in children is of concern due to its effect on language development. Before universal newborn hearing screening (UNHS), accurate information on the progression of hearing loss was difficult to obtain due to limited information on hearing loss onset. The objective of this population-based study was to document the proportion of children who experienced progressive loss in a cohort followed through a UNHS program in one region of Canada. We explored risk factors for progression, including risk indicators and the audiologic and clinical characteristics of the children. We also investigated deterioration in hearing as a function of age. For this study, two working definitions of progressive hearing loss were adopted: (1) a change of ≥20 dB in the 3 frequencies (500, 1000, and 2000 Hz) pure-tone average, and (2) a decrease of ≥10 dB at two or more adjacent frequencies between 500 and 4000 Hz or a decrease of 15 dB at one octave frequency in the same frequency range. Design: Population-based data were collected prospectively on a cohort of children identified from 2003 to 2013 after the implementation of UNHS. Clinical characteristics including risk indicators (as per Joint Committee on Infant Hearing), age at diagnosis, type and severity of hearing loss, and initial audiologic information were recorded when children were first identified with hearing loss. Serial audiometric results were extracted from the medical charts for this study. Differences between children with progressive and stable hearing loss were explored using χ2 tests. Association between risk indicators and progressive hearing loss was assessed through logistic regression. The cumulative amount of deterioration in hearing from 1 to 4 years of age was also examined.
Results: Our analysis of 330 children (251 exposed to screening) with detailed audiologic records showed that 158 (47.9%) children had some deterioration (≥10 dB) in hearing thresholds in at least one ear. The 158 children included 76 (48.1%) with ≥20 dB loss in pure-tone average in at least one ear and 82 (51.9%) with less deterioration in hearing levels (≥10 but

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1oXn
via IFTTT

Tinnitus Self-Efficacy and Other Tinnitus Self-Report Variables in Patients With and Without Post-Traumatic Stress Disorder

Objective: Individuals with tinnitus and co-occurring psychological conditions typically rate their tinnitus as more disturbing than individuals without such comorbidities. Little is known about how tinnitus self-efficacy, or the confidence that individuals have in their abilities to successfully manage the effects of tinnitus, is influenced by mental or psychological health (PH) status. The purpose of this study was to examine the influence of psychological state on tinnitus perceptions and tinnitus self-efficacy in individuals with chronic tinnitus. Design: Observational study. Three groups (N = 199) were examined and included: (1) those with tinnitus without a concurrent psychological condition (tinnitus-only; n = 103), (2) those with tinnitus and concurrent PH condition other than post-traumatic stress disorder (PTSD; tinnitus + PH; n = 34), and (3) those with tinnitus and PTSD (tinnitus + PTSD; n = 62). The Self-Efficacy for Tinnitus Management Questionnaire (SETMQ) was administered. Responses on the SETMQ were compared among the groups, as well as to other indicators of tinnitus perception such as (1) the percentage of time tinnitus was audible (tinnitus awareness), (2) the percentage of time tinnitus was distressing/bothersome, (3) tinnitus loudness, (4) tinnitus handicap inventory scores, (5) subjective ratings of degree of hearing loss, and (6) subjective ratings of sound tolerance problems. Results: The tinnitus + PTSD group reported significantly poorer tinnitus self-efficacy levels on average than the tinnitus-only group on all SETMQ subscales and poorer self-efficacy levels than the tinnitus + PH group for most subscales (except for routine management and devices). Tinnitus self-efficacy levels were similar between the tinnitus + PH and tinnitus-only groups except for the emotional response subscale in which the tinnitus-only patients reported higher self-efficacy on average than both the other groups.
Group differences were not seen for tinnitus loudness ratings or for the amount of time individuals were aware of their tinnitus. Group differences were observed for the percentage of time tinnitus was distressing/bothersome, self-reported degree of hearing loss, sound tolerance problem ratings, and responses on the Tinnitus Handicap Inventory (THI). In general, the group differences revealed that ratings for the tinnitus-only group were least severe, followed by the tinnitus + PH group, with the tinnitus + PTSD group rating tinnitus effects as most severe. Across all patient responses, the tinnitus + PTSD group was significantly more affected by tinnitus than the tinnitus-only group; in some cases, responses were similar between the tinnitus + PTSD and tinnitus + PH groups, and in other cases, responses were similar between the tinnitus + PH and tinnitus-only groups. Conclusions: Tinnitus self-efficacy, along with other self-assessed tinnitus characteristics, varied across groups distinguished by PH diagnoses. In general, individuals with tinnitus and concurrent PTSD reported significantly poorer tinnitus self-efficacy and more handicapping tinnitus effects than individuals with other psychological conditions or those with tinnitus alone. The group differences highlight the need to consider tinnitus self-efficacy in intervention strategies, particularly for patients with tinnitus and concurrent PTSD, as the results reiterate the unique ability of PTSD to interact in powerful and disturbing ways with the tinnitus experience and with patients' coping ability.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0Dhb
via IFTTT

Hearing Instruments for Unilateral Severe-to-Profound Sensorineural Hearing Loss in Adults: A Systematic Review and Meta-Analysis

Objectives: A systematic review of the literature and a meta-analysis were conducted to assess the nature and quality of the evidence for the use of hearing instruments in adults with unilateral severe to profound sensorineural hearing loss. Design: The PubMed, EMBASE, MEDLINE, Cochrane, CINAHL, and DARE databases were searched with no restrictions on language. The search included articles from the start of each database until February 11, 2015. Studies were included that (a) assessed the impact of any form of hearing instrument, including devices that reroute signals between the ears or restore aspects of hearing to a deaf ear, in adults with severe to profound sensorineural loss in one ear and normal or near-normal hearing in the other ear; (b) compared different devices or compared a device with placebo or the unaided condition; (c) measured outcomes in terms of speech perception, spatial listening, or quality of life; and (d) were prospective controlled or observational studies. Studies that met prospectively defined criteria were subjected to random effects meta-analyses. Results: Twenty-seven studies reported in 30 articles were included. The evidence was graded as low-to-moderate quality, having been obtained primarily from observational before-after comparisons. The meta-analysis identified statistically significant benefits to speech perception in noise for devices that rerouted the speech signals of interest from the worse ear to the better ear using either air or bone conduction (mean benefit, 2.5 dB). However, these devices also degraded speech understanding significantly and to a similar extent (mean deficit, 3.1 dB) when noise was rerouted to the better ear. Data on the effects of cochlear implantation on speech perception could not be pooled, as the prospectively defined criteria for meta-analysis were not met. Inconsistency in the assessment of outcomes relating to sound localization also precluded the synthesis of evidence across studies. 
Evidence for the relative efficacy of different devices was sparse, but a statistically significant advantage was observed for rerouting speech signals using abutment-mounted bone conduction devices, compared with outcomes after preoperative trials of air conduction devices, when speech and noise were colocated (mean benefit, 1.5 dB). Patients reported significant improvements in hearing-related quality of life both with rerouting devices and following cochlear implantation. Only two studies measured health-related quality of life, and their findings were inconclusive. Conclusions: Devices that reroute sounds from an ear with a severe to profound hearing loss to an ear with minimal hearing loss may improve speech perception in noise when signals of interest are located toward the impaired ear. However, the same devices may also degrade speech perception, as all signals are rerouted indiscriminately, including noise. Although the restoration of functional hearing in both ears through cochlear implantation could be expected to benefit speech perception, the inability to synthesize evidence across existing studies means that such a conclusion cannot yet be drawn. For the same reason, it remains unclear whether cochlear implantation can improve the ability to localize sounds despite restoring bilateral input. Prospective controlled studies that measure outcomes consistently and control for selection and observation biases are required to improve the quality of the evidence for the provision of hearing instruments to patients with unilateral deafness and to support any future recommendations for the clinical management of these patients.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0UAv
via IFTTT