Wednesday, 24 February 2016

A Randomized Controlled Trial to Evaluate the Benefits of a Multimedia Educational Program for First-Time Hearing Aid Users

Objectives: The aims of this study were to (1) develop a series of short interactive videos (or reusable learning objects [RLOs]) covering a broad range of practical and psychosocial issues relevant to auditory rehabilitation for first-time hearing aid users; (2) establish the accessibility, take-up, acceptability, and adherence of the RLOs; and (3) assess the benefits and cost-effectiveness of the RLOs. Design: The study was a single-center, prospective, randomized controlled trial with two arms. The intervention group (RLO+, n = 103) received the RLOs plus standard clinical service, including hearing aid(s) and counseling, and the waitlist control group (RLO−, n = 100) received standard clinical service only. The effectiveness of the RLOs was assessed 6 weeks after hearing aid fitting. Seven RLOs (total duration 1 hr) were developed using a participatory, community-of-practice approach involving hearing aid users and audiologists. RLOs included video clips, illustrations, animations, photos, sounds, and testimonials, and all were subtitled. RLOs were delivered on DVD for TV (50.6%) or PC (15.2%), or via the internet (32.9%). Results: RLO take-up was 78%. Overall adherence was at least 67%, and 97% in those who attended the 6-week follow-up. Half the participants watched the RLOs two or more times, suggesting self-management of their hearing loss, hearing aids, and communication. The RLOs were rated as highly useful, and the majority of participants agreed that the RLOs were enjoyable, improved their confidence, and were preferable to written information. Postfitting, there was no significant between-group difference in the primary outcome measure, overall hearing aid use. However, there was significantly greater hearing aid use in the RLO+ group among suboptimal users. Furthermore, the RLO+ group had significantly better knowledge of practical and psychosocial issues, and significantly better practical hearing aid skills, than the RLO− group. Conclusions: The RLOs were shown to be beneficial to first-time hearing aid users across a range of quantitative and qualitative measures. This study provides evidence that the RLOs may offer valuable learning and educational support for first-time hearing aid users and could be used to supplement clinical rehabilitation practice.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHVSm
via IFTTT

Factors Predicting Postoperative Unilateral and Bilateral Speech Recognition in Adult Cochlear Implant Recipients with Acoustic Hearing

Objectives: The first objective was to examine factors that could be predictive of postoperative unilateral (cochlear implant alone) speech recognition ability in a group of subjects with greater degrees of preoperative acoustic hearing than has previously been examined. Second, the study aimed to identify factors predictive of speech recognition in the best-aided, bilateral listening condition. Design: Participants were 65 postlinguistically hearing-impaired adults with preoperative phoneme-in-quiet scores of greater than or equal to 46% in one or both ears. Preoperative demographic and audiometric factors were assessed as predictors of 12-month postoperative unilateral and bilateral monosyllabic word scores in quiet and of bilateral speech reception threshold (SRT) in babble. Results: The predictive regression model accounted for 34.1% of the variance in unilateral word recognition scores in quiet. Factors that predicted better scores were a shorter duration of severe to profound hearing loss in the implanted ear and poorer pure-tone-average thresholds in the contralateral ear. Predictive regression models of postimplantation bilateral function accounted for 36.0% of the variance for word scores in quiet and 30.9% of the variance for SRT in noise. A shorter duration of severe to profound hearing loss in the implanted ear, a lower age at implantation, and better contralateral hearing thresholds were associated with better bilateral word recognition in quiet and SRT in noise. Conclusions: In this group of cochlear implant recipients with preoperative acoustic hearing, a shorter duration of severe to profound hearing loss in the implanted ear was predictive of better unilateral and bilateral outcomes. However, further research is warranted to better understand the impact of that factor in a larger number of subjects with long-term hearing impairment of greater than 30 years. Better contralateral hearing was associated with poorer unilateral word scores with the implanted ear alone but better absolute bilateral speech recognition. Consequently, different models are needed to predict unilateral and bilateral postimplantation scores.
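
The kind of predictive model described here can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the authors' code or data) of fitting a multiple linear regression of 12-month word scores on preoperative factors and reading off the proportion of variance explained; the predictor names and simulated values are assumptions for illustration only.

    # Hypothetical sketch: regression of postoperative word scores on
    # preoperative factors. All data are simulated, not the study's.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 65  # the study's sample size
    X = np.column_stack([
        rng.uniform(0, 30, n),   # duration of severe-profound HL, implanted ear (yr)
        rng.uniform(20, 90, n),  # contralateral pure-tone average (dB HL)
        rng.uniform(30, 85, n),  # age at implantation (yr)
    ])
    y = rng.uniform(0, 100, n)   # 12-month unilateral word score (%)

    model = LinearRegression().fit(X, y)
    print(f"R^2 = {model.score(X, y):.3f}")  # cf. the 34.1% of variance reported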

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHVBY
via IFTTT

Within- and Across-Subject Variability of Repeated Measurements of Medial Olivocochlear-Induced Changes in Transient-Evoked Otoacoustic Emissions

Objectives: Measurement of changes in transient-evoked otoacoustic emissions (TEOAEs) caused by activation of the medial olivocochlear reflex (MOCR) may have clinical applications, but the clinical utility is dependent in part on the amount of variability across repeated measurements. The purpose of this study was to investigate the within- and across-subject variability of these measurements in a research setting as a step toward determining the potential clinical feasibility of TEOAE-based MOCR measurements. Design: In 24 normal-hearing young adults, TEOAEs were elicited with 35 dB SL clicks and the MOCR was activated by 35 dB SL broadband noise presented contralaterally. Across a 5-week span, changes in both TEOAE amplitude and phase evoked by MOCR activation (MOC shifts) were measured at four sessions, each consisting of four independent measurements. Efforts were undertaken to reduce the effect of potential confounds, including slow drifts in TEOAE amplitude across time, activation of the middle-ear muscle reflex, and changes in subjects’ attentional states. MOC shifts were analyzed in seven 1/6-octave bands from 1 to 2 kHz. The variability of MOC shifts was analyzed at the frequency band yielding the largest and most stable MOC shift at the first session. Within-subject variability was quantified by the size of the standard deviations across all 16 measurements. Across-subject variability was quantified as the range of MOC shift values across subjects and was also described qualitatively through visual analyses of the data. Results: A large majority of MOC shifts in subjects were statistically significant. Most subjects showed stable MOC shifts across time, as evidenced by small standard deviations and by visual clustering of their data. However, some subjects showed within- and across-session variability that could not be explained by changes in hearing status, middle ear status, or attentional state. Simulations indicated that four baseline measurements were sufficient to predict the expected variability of subsequent measurements. However, the measured variability of subsequent MOC shifts in subjects was often larger than expected (based on the variability present at baseline), indicating the presence of additional variability at subsequent sessions. Conclusions: Results indicated that a wide range of within- and across-subject variability of MOC shifts was present in a group of young normal-hearing individuals. In some cases, very large changes in MOC shifts (e.g., 1.5 to 2 dB) would need to occur before one could attribute the change to either an intervention or pathology, rather than to measurement variability. It appears that MOC shifts, as analyzed in the present study, may be too variable for clinical use, at least in some individuals. Further study is needed to determine the extent to which changes in MOC shifts can be reliably measured across time for clinical purposes.
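
A minimal sketch of the variability measures described, assuming MOC shifts arranged as a subjects-by-measurements array (4 sessions x 4 measurements = 16 per subject); the data here are simulated, not the study's.

    import numpy as np

    rng = np.random.default_rng(1)
    moc_shifts = rng.normal(loc=1.0, scale=0.3, size=(24, 16))  # dB; 24 subjects

    # Within-subject variability: SD across all 16 measurements per subject.
    within_sd = moc_shifts.std(axis=1, ddof=1)
    # Across-subject variability: range of mean MOC shifts across subjects.
    across_range = np.ptp(moc_shifts.mean(axis=1))
    print(within_sd.round(2), f"across-subject range = {across_range:.2f} dB")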

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHVBL
via IFTTT

Text as a Supplement to Speech in Young and Older Adults

Objective: The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, the authors tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults with normal or impaired hearing. The working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that (1) combining auditory and visual text information would result in improved recognition accuracy compared with auditory or visual text information alone, (2) benefit from supplementing speech with visual text (auditory and visual enhancement) would be greater in young adults than in older adults, and (3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design: Fifteen young adults with normal hearing, 15 older adults with normal hearing, and 15 older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance in the auditory- and visual-text-only conditions. Finally, the relationship between perceptual measures and the other independent measures was examined using principal-component factor analyses, followed by regression analyses. Results: Both young and older adults performed similarly on 9 out of 10 perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor of a general speech-text integration ability. Conclusions: These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text used to support speech, after ensuring audibility through spectral shaping. They also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills.
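
The analysis pipeline described (principal-component reduction of the cognitive battery, then regression on the perceptual scores) can be sketched as follows; the dimensions and simulated data are illustrative assumptions, not the study's dataset.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    cognitive = rng.normal(size=(45, 8))  # 45 participants x 8 cognitive measures
    av_benefit = rng.normal(size=45)      # audiovisual (speech-text) benefit score

    factors = PCA(n_components=2).fit_transform(cognitive)  # cognitive components
    fit = LinearRegression().fit(factors, av_benefit)
    print("variance in benefit explained:", round(fit.score(factors, av_benefit), 3))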

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHVln
via IFTTT

Auditory Lexical Decision and Repetition in Children: Effects of Acoustic and Lexical Constraints

Objectives: The objective of this study was to identify factors that may detract from children’s ability to identify words they do and do not know. Factors investigated were acoustic constraints stemming from the presence of hearing loss (HL) or an acoustic competitor, and lexical constraints due to an impoverished or cluttered vocabulary. Design: Eleven children with normal hearing (NH) and 11 children with bilateral, mild to moderately severe sensorineural HL were asked to categorize and repeat two-syllable real and nonsense words. Stimuli were amplified and frequency shaped for each child with HL and presented randomly at a level consistent with average conversational speech (65 dB SPL). About half of the children in each group listened in quiet while the other half listened in multitalker babble. In addition to overall performance, responses were judged based on the word category chosen by the child (real or nonsense), the category of the word produced by the child as judged by an examiner (real or nonsense), and the accuracy of the verbal response compared with the stimulus. From these judgments, 10 discrete types of errors were identified. Analyses were conducted for three different combinations of the 10 error categories to best characterize the effects of acoustic and lexical constraints. Results: Performance was highest for real words presented in quiet and poorest for nonsense words presented in multitalker babble. Also, the performance of the children with HL was poorer than that of the children with NH. Error analyses revealed strong effects of acoustic constraints on performance but few effects of lexical constraints. The two most frequently occurring errors were the same for both children with NH and the children with HL and entailed the misperception of nonsense words and the mistaking of nonsense words for real words. However, while both groups of children exhibited these errors in multitalker babble, the children with HL demonstrated these errors in quiet as well. Conclusions: These results suggest that children’s interactions with real and nonsense words are significantly constrained when the acoustic signal is degraded by HL and/or an acoustic competitor. The children’s tendency to repair unknown words into real words in the presence of acoustic interference may be beneficial when perceiving familiar speech, but could also be detrimental if that tendency causes them to miss opportunities to learn new words.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHV4X
via IFTTT

Distribution Characteristics of Air-Bone Gaps: Evidence of Bias in Manual Audiometry

Objectives: Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design: The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects across the five databases. Results: Automated audiometry produced air-bone gaps that were normally distributed, suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and showed evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions: Thresholds obtained by manual audiometry show tester bias effects stemming from assumptions about the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps.
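
The air-bone gap itself is simply the air-conduction threshold minus the bone-conduction threshold at a given frequency, and the normality claim can be checked directly. A minimal sketch, using simulated thresholds and a Shapiro-Wilk test rather than the databases analyzed here:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    ac = rng.normal(20, 8, size=1000)  # air-conduction thresholds (dB HL)
    bc = rng.normal(18, 8, size=1000)  # bone-conduction thresholds (dB HL)

    abg = ac - bc  # air-bone gap (dB); normal if AC and BC are normal
    w, p = stats.shapiro(abg)
    print(f"mean ABG = {abg.mean():.1f} dB, Shapiro-Wilk p = {p:.3f}")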

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHUht
via IFTTT

Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language

Objectives: To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants’ processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Design: Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed in their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure of discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). Results: (1) Infants with CI discriminated between lexical stress patterns with only limited auditory experience with their implant device; (2) discrimination of stress patterns in infants with CI was reduced compared with that of infants with NH; (3) both groups showed directional asymmetry in discrimination, that is, increased discrimination from the uncommon to the common stress pattern in Hebrew (/dóti/ versus /dotí/) compared with the reversed condition. Conclusions: The CI device transmitted sufficient acoustic information (amplitude, duration, and fundamental frequency) to allow discrimination between stress patterns in young hearing-impaired infants with CI. The present pattern of results supports a discrimination model in which both auditory capabilities and “top–down” interactions are involved. That is, the CI infants detected changes between stressed and unstressed syllables, after which they developed a bias for the more common weak–strong stress pattern in Hebrew. The latter suggests that infants with CI were able to extract the statistical distribution of stress patterns by listening to the ambient language, even after limited auditory experience with the CI device. To conclude, in relation to processing of lexical stress patterns, infants with CI followed similar developmental milestones as hearing infants, thus establishing important prerequisites for early language acquisition.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHUhj
via IFTTT

Asymmetric Hearing Loss in Chinese Workers Exposed to Complex Noise

Objectives: To evaluate audiometric asymmetry in Chinese industrial workers and to investigate the effects of noise exposure, sex, and binaural average thresholds on audiometric asymmetry. Design: Data collected from Chinese industrial workers during a cross-sectional study were reanalyzed. Of the 1388 workers, 266 met the inclusion criteria for this study. Each subject underwent a physical examination and an otologic examination and completed a health-related questionnaire. χ2 and t tests were used to examine the differences between the asymmetric and symmetric hearing loss groups. Results: One hundred thirty-one subjects (49.2%) had a binaural hearing threshold difference of 15 dB or more for at least one frequency, and there was no statistically significant difference between the left and right ears. The asymmetric hearing loss group was not exposed to higher cumulative noise levels (t = 0.522, p = 0.602), and there was no dose–response relation between asymmetry and cumulative noise levels (χ2 = 6.502, p = 0.165). Men were 1.849 times more likely than women to have asymmetry (95% confidence interval, 1.051 to 3.253). Audiometric asymmetry was 1.024 times more prevalent among workers with higher high-frequency hearing thresholds than among those with lower high-frequency thresholds (95% confidence interval, 1.004 to 1.044). Conclusions: The results indicated that occupational noise exposure contributed minimally to asymmetry, whereas sex and binaural average thresholds significantly affected audiometric asymmetry. There was no evidence that the left ears were worse than the right ears.
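
The inclusion criterion for asymmetry (a binaural threshold difference of 15 dB or more at one or more frequencies) is easy to operationalize. A minimal sketch, with invented audiogram values:

    # Audiograms as frequency (Hz) -> threshold (dB HL); values invented.
    left  = {500: 15, 1000: 20, 2000: 35, 3000: 50, 4000: 60, 6000: 55}
    right = {500: 10, 1000: 20, 2000: 25, 3000: 30, 4000: 40, 6000: 50}

    def is_asymmetric(left_ear, right_ear, criterion_db=15):
        """True if any shared frequency differs by >= criterion_db between ears."""
        return any(abs(left_ear[f] - right_ear[f]) >= criterion_db
                   for f in left_ear.keys() & right_ear.keys())

    print(is_asymmetric(left, right))  # True: 20 dB differences at 3000 and 4000 Hz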

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHS9l
via IFTTT

Immediate and Short-Term Therapeutic Results Between Direction-Changing Positional Nystagmus with Short- and Long-Duration Groups

Objectives: Clinicians sometimes treat patients with relatively long-duration geotropic direction-changing positional nystagmus (DCPN) without latency. Recently, the concept of a “light cupula” in the lateral canal that produces persistent geotropic DCPN has been introduced. In the present study, we investigated immediate and short-term therapeutic outcomes in long-duration DCPN. Design: The authors prospectively compared the therapeutic efficacy of a canalith-repositioning procedure (CRP) in short- and long-duration geotropic DCPN. Results: In patients with long-duration DCPN, the authors found no immediate therapeutic effect, and the proportion of patients showing short-term effects (on the next day) was very low compared with that among patients with short-duration DCPN. In addition, no cases exhibited canal conversion after the CRP. Conclusion: Our results suggest that CRP is not useful in patients with long-duration geotropic DCPN and that the pathogenesis of long-duration geotropic DCPN may originate not from free-floating debris but from deflection of the cupula.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHU0T
via IFTTT

Revealing Hearing Loss: A Survey of How People Verbally Disclose Their Hearing Loss

Objective: Hearing loss is the most common sensory deficit and congenital anomaly, yet the decision-making processes involved in disclosing hearing loss have been little studied. To address this issue, we explored the phrases that adults with hearing loss use to disclose their hearing loss. Design: Since self-disclosure research has not focused on hearing loss-specific issues, we created a 15-question survey about verbally disclosing hearing loss. English-speaking adults (>18 years old) with hearing loss of any etiology were recruited from otology clinics in a major referral hospital. Three hundred and thirty-seven participants completed the survey instrument. Participants’ phrase(s) used to tell people they have hearing loss were compared across objective characteristics (age; sex; type, degree, and laterality of hearing loss; word recognition scores) and self-reported characteristics (degree of hearing loss; age of onset and years lived with hearing loss; use of technology; hearing handicap score). Results: Participants’ responses revealed three strategies to address hearing loss: Multipurpose disclosure (phrases that disclose hearing loss and provide information to facilitate communication), Basic disclosure (phrases that disclose hearing loss through the term, a label, or details about the condition), or Nondisclosure (phrases that do not disclose hearing loss). Variables were compared between patients who used and who did not use each disclosure strategy using χ2 or Wilcoxon rank sum tests. Multipurpose disclosers were mostly female (p = 0.002); had experienced reactions of help, support, and accommodation after disclosing (p = 0.008); and had experienced reactions of being overly helpful after disclosing (p = 0.039). Basic disclosers were predominantly male (p = 0.004); reported feeling somewhat more comfortable disclosing their hearing loss over time (p = 0.009); had not experienced reactions of being treated unfairly or discriminated against (p = 0.021); and were diagnosed with mixed hearing loss (p = 0.004). Nondisclosers tended not to disclose in a group setting (p = 0.002) and were diagnosed with bilateral hearing loss (p = 0.005). In addition, all of the variables were examined to build logistic regression models to predict the use of each disclosure strategy. Conclusions: Our results reveal three simple strategies for verbally addressing hearing loss that can be used in a variety of contexts. We recommend educating people with hearing loss about these strategies; this could improve the experience of disclosing hearing loss and could educate society at large about how to interact with those who have hearing loss.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XNHS9b
via IFTTT

Interleaved Processors Improve Cochlear Implant Patients’ Spectral Resolution

Objective: Cochlear implant patients have difficulty in noisy environments, in part because of channel interaction. Interleaving the signal by sending every other channel to the opposite ear has the potential to reduce channel interaction by increasing the space between channels in each ear, while still potentially providing the same amount of spectral information when the two ears are combined. Although this method has been successful in other populations such as hearing aid users, interleaving with cochlear implant patients has not yielded consistent benefits. This may be because perceptual misalignment between the two ears and the spacing between stimulation locations must be taken into account before interleaving. Design: Eight bilateral cochlear implant users were tested. After perceptually aligning the two ears, 12-channel maps were made that spanned the entire aligned portion of the array. Interleaved maps were created by removing every other channel from each ear. Participants’ spectral resolution and localization abilities were measured with perceptually aligned processing strategies both with and without interleaving. Results: There was a significant improvement in spectral resolution with interleaving. However, there was no significant effect of interleaving on localization abilities. Conclusions: The results indicate that interleaving can improve cochlear implant users’ spectral resolution. However, it may be necessary to perceptually align the two ears and/or use relatively large spacing between stimulation locations.
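
The interleaving scheme described (every other channel to the opposite ear, so that the two ears together preserve the full map) reduces to a simple split. A minimal sketch with abstract channel indices, assuming the 12-channel maps have already been perceptually aligned:

    channels = list(range(12))   # perceptually aligned 12-channel map

    left_map  = channels[0::2]   # channels 0, 2, 4, ... to the left ear
    right_map = channels[1::2]   # channels 1, 3, 5, ... to the right ear

    # Spacing within each ear doubles, but the ears combined cover all channels.
    assert sorted(left_map + right_map) == channels
    print("left:", left_map, "right:", right_map)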

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1SVBLR1
via IFTTT

The Audiometric and Mechanical Effects of Partial Ossicular Discontinuity

Objectives: Ossicular discontinuity may be complete, with no contact between the disconnected ends, or partial, where normal contact at an ossicular joint or along a continuous bony segment of an ossicle is replaced by soft tissue or simply by contact of opposing bones. Complete ossicular discontinuity typically results in an audiometric pattern of a large, flat conductive hearing loss. In contrast, in cases where otomicroscopy reveals a normal external ear canal and tympanic membrane, high-frequency conductive hearing loss has been proposed as an indicator of partial ossicular discontinuity. Nevertheless, the diagnostic utility of high-frequency conductive hearing loss has been limited by gaps in previous research on the subject, and clinicians often assume that an audiogram showing high-frequency conductive hearing loss is flawed. This study aims to improve the diagnostic utility of high-frequency conductive hearing loss in cases of partial ossicular discontinuity by (1) making use of a control population against which to compare the audiometry of partial ossicular discontinuity patients and (2) examining the correlation between high-frequency conductive hearing loss and partial ossicular discontinuity under controlled experimental conditions in fresh cadaveric temporal bones. Furthermore, ear-canal measurements of umbo velocity and wideband acoustic immittance were investigated to determine their usefulness in the diagnosis of ossicular discontinuity. Design: The authors analyzed audiograms from 66 patients with either form of surgically confirmed ossicular discontinuity and no confounding pathologies. The authors also analyzed umbo velocity (n = 29) and power reflectance (n = 12) measurements from a subset of these patients. Finally, the authors performed experiments on six fresh temporal bone specimens to study the differing mechanical effects of complete and partial discontinuity. The mechanical effects of these lesions were assessed via laser Doppler measurements of stapes velocity. In a subset of the specimens (n = 4), wideband acoustic immittance measurements were also collected. Results: (1) Calculations comparing the air–bone gap (ABG) at high and low frequencies showed that when high-frequency ABGs were larger than low-frequency ABGs, the surgeon usually reported soft-tissue bands at the point of discontinuity. However, in cases with larger low-frequency ABGs or flat ABGs across frequencies, some partial discontinuities as well as complete discontinuities were reported. (2) Analysis of umbo velocity and power reflectance (calculated from wideband acoustic immittance) in patients revealed no significant difference across frequencies between the two types of ossicular discontinuity. (3) Temporal bone experiments revealed that partial discontinuity results in a greater loss in stapes velocity at high frequencies than at low frequencies, whereas with complete discontinuity, large losses in stapes velocity occur at all frequencies. Conclusion: The clinical and experimental findings suggest that when encountering larger ABGs at high frequencies than at low frequencies, partial ossicular discontinuity should be considered in the differential diagnosis.
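
The diagnostic pattern at issue, high-frequency ABGs exceeding low-frequency ABGs, can be expressed as a simple comparison. A minimal sketch; the frequency groupings and decision threshold are illustrative assumptions, not values from the study:

    abg = {250: 10, 500: 15, 1000: 20, 2000: 30, 4000: 35}  # ABG in dB per frequency

    low  = [abg[f] for f in (250, 500, 1000)]
    high = [abg[f] for f in (2000, 4000)]

    hf_minus_lf = sum(high) / len(high) - sum(low) / len(low)
    if hf_minus_lf > 5:  # illustrative threshold only
        print(f"HF ABGs exceed LF ABGs by {hf_minus_lf:.1f} dB on average: "
              "consider partial ossicular discontinuity in the differential")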

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1mYynGV
via IFTTT

Corneal-Reflection Eye-Tracking Technique for the Assessment of Horizontal Sound Localization Accuracy from 6 Months of Age

Objectives: The evaluation of sound localization accuracy (SLA) requires precise behavioral responses from the listener. Such responses are not always possible to elicit in infants and young children, and procedures for the assessment of SLA are time-consuming. The aim of this study was to develop a fast, valid, and objective method for the assessment of SLA from 6 months of age. To this end, pupil positions toward spatially distributed continuous auditory and visual stimuli were recorded. Design: Twelve children (29 to 157 weeks of age) who passed the universal newborn hearing screening and eight adults (18 to 40 years of age) who had pure-tone thresholds ≤20 dB HL in both ears participated in this study. Horizontal SLA was measured in a sound field with 12 loudspeaker/display (LD) pairs placed in an audiological test room at 10-degree intervals in the frontal horizontal plane (±55 degrees azimuth). An ongoing auditory-visual stimulus was presented at 63 dB SPL(A) and was shifted to randomly selected loudspeakers simultaneously with pauses of the visual stimulus. The visual stimulus was automatically reintroduced at the azimuth of the sounding loudspeaker after a sound-only period of 1.6 sec. A corneal-reflection eye-tracking technique allowed the acquisition of the subjects’ pupil positions relative to the LD pairs. The perceived azimuth was defined as the median of the intersections between gaze and LD pairs during the final 500 msec of the sound-only period. Overall SLA was quantified by an Error Index (EI), where EI = 0 corresponded to a perfect match between perceived and presented azimuths, whereas EI = 1 corresponded to chance. Results: SLA was rapidly measured in children (mean = 168 sec, n = 12) and adults (mean = 162 sec, n = 8). Visual inspection of gaze data indicated that gaze shifts occurred in sound-only periods. In children, the medians of the perceived sound-source azimuths either coincided with the presented sound-source azimuth or were offset by a maximum of 20 degrees. In contrast, adults showed a perfect match from −55 to 55 degrees, except at 15 degrees azimuth (median = 20 degrees), with 9/12 of the quartile ranges = 0 degrees. Children showed a mean (SD) EI of 0.42 (0.17), which was significantly higher than that in adults (p
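
The perceived-azimuth measure is concrete enough to sketch: the median of the gaze/LD-pair intersections during the final 500 msec of the sound-only period, compared against the presented azimuth. In the sketch below the gaze samples are invented, and the normalization to chance (giving EI = 0 for a perfect match and EI = 1 at chance) uses an assumed chance-level error, since the exact EI formula is not given in the abstract:

    import statistics

    gaze_azimuths_deg = [-15, -5, -5, 5, -5, -5]  # intersections in final 500 ms
    presented_deg = -5

    perceived_deg = statistics.median(gaze_azimuths_deg)
    error = abs(perceived_deg - presented_deg)    # 0 degrees for this trial

    chance_error = 36.7  # assumed mean absolute error at chance (deg)
    ei = error / chance_error                     # 0 = perfect, 1 = chance
    print(f"perceived = {perceived_deg} deg, EI contribution = {ei:.2f}")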

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1mYyoKP
via IFTTT

The Relation Between Child Versus Parent Report of Chronic Fatigue and Language/Literacy Skills in School-Age Children with Cochlear Implants

Objectives: Preliminary evidence suggests that children with hearing loss experience elevated levels of chronic fatigue compared with children with normal hearing. Chronic fatigue is associated with decreased academic performance in many clinical populations. Children with cochlear implants as a group exhibit deficits in language and literacy skills; however, the relation between chronic fatigue and language and literacy skills for children with cochlear implants is unclear. The purpose of this study was to explore subjective ratings of chronic fatigue by children with cochlear implants and their parents, as well as the relation between chronic fatigue and language and literacy skills in this population. Design: Nineteen children with cochlear implants in grades 3 to 6 and one of their parents separately completed a subjective chronic fatigue scale, on which they rated how much the child experienced physical, sleep/rest, and cognitive fatigue over the past month. In addition, children completed an assessment battery that included measures of speech perception, oral language, word reading, and spelling. Results: Children and parents reported different levels of chronic child physical and sleep/rest fatigue; in both cases, parents reported significantly less fatigue than did children. Children and parents did not report different levels of chronic child cognitive fatigue. Child report of physical fatigue was related to speech perception, language, reading, and spelling. Child report of sleep/rest and cognitive fatigue was related to speech perception and language but not to reading or spelling. Parent report of child fatigue was not related to children’s language and literacy skills. Conclusions: Taken as a whole, the results suggest that parents underestimate the fatigue experienced by children with cochlear implants. Child report of physical fatigue was robustly related to language and literacy skills. Children with cochlear implants are likely more accurate at reporting physical fatigue than cognitive fatigue. Clinical practice should take fatigue into account when developing treatment plans for children with cochlear implants, and research should continue to develop a comprehensive model of fatigue in children with cochlear implants.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1mYyoKJ
via IFTTT

Insertion Depth in Cochlear Implantation and Outcome in Residual Hearing and Vestibular Function

Objectives: It has long been known that cochlear implantation may cause loss of residual hearing and vestibular function. Different insertion depths may cause varying degrees of intracochlear trauma in the apical region of the cochlea. The present study investigated the correlation between insertion depth and postoperative loss of residual hearing and vestibular function. Design: Thirty-nine adults underwent unilateral cochlear implantation. One group received a Med-El +Flex24 electrode array (24 mm; n = 4), one group received a Med-El +Flex28 electrode array (28 mm; n = 18), and one group received a Med-El +FlexSOFT electrode array (31.5 mm; n = 17). Residual hearing, cervical vestibular-evoked myogenic potentials, videonystagmography, and subjective visual vertical/horizontal were assessed before and after surgery. The electrode insertion depth and scalar position were examined with high-resolution rotational tomography after implantation in 29 subjects. Results: There was no observed relationship between the angular insertion depth (405° to 708°) and loss of low-frequency pure-tone average. Frequency-specific analysis revealed a weak relationship between the angular insertion depth and loss of hearing at 250 Hz (R2 = 0.20; p = 0.02). There was no statistically significant difference in residual hearing and vestibular function between the +Flex28 and the +FlexSOFT electrode arrays. Eight percent of the cases had vertigo after surgery. The electrode arrays were positioned inside the scala tympani, not the scala vestibuli, in all subjects. In 18% of the cases, the +FlexSOFT electrode array was not fully inserted. Conclusions: The final outcome in residual hearing correlates only very weakly with the angular insertion depth for depths above 405°. Postoperative loss of vestibular function did not correlate with the angular insertion depth or age at implantation. The surgical protocol used in this study appears to minimize the risk of postoperative vertigo symptoms.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1mYyn9M
via IFTTT

Human Frequency Following Response: Neural Representation of Envelope and Temporal Fine Structure in Listeners with Normal Hearing and Sensorineural Hearing Loss

Objective: Listeners with sensorineural hearing loss (SNHL) typically experience reduced speech perception, which is not completely restored with amplification. This likely occurs because cochlear damage, in addition to elevating audiometric thresholds, alters the neural representation of speech transmitted to higher centers along the auditory neuroaxis. While the deleterious effects of SNHL on speech perception in humans have been well documented using behavioral paradigms, our understanding of the neural correlates underlying these perceptual deficits remains limited. Using the scalp-recorded frequency following response (FFR), the authors examined the effects of SNHL and aging on the subcortical neural representation of acoustic features important for pitch and speech perception, namely the periodicity envelope (F0) and temporal fine structure (TFS; formant structure), as reflected in the phase-locked neural activity generating the FFR. Design: FFRs were obtained from 10 listeners with normal hearing (NH) and 9 listeners with mild-moderate SNHL in response to a steady-state English back vowel /u/ presented at multiple intensity levels. Use of multiple presentation levels facilitated comparisons at equal sound pressure level (SPL) and equal sensation level. In a second, follow-up experiment to address the effect of age on envelope and TFS representation, FFRs were obtained from 25 NH listeners and 19 listeners with mild to moderately severe SNHL for the same vowel stimulus presented at 80 dB SPL. Temporal waveforms, fast Fourier transforms, and spectrograms were used to evaluate the magnitude of the phase-locked activity at F0 (periodicity envelope) and F1 (TFS). Results: Neural representation of both envelope (F0) and TFS (F1) at equal SPLs was stronger in NH listeners than in listeners with SNHL. Also, comparison of the neural representation of F0 and F1 across stimulus levels expressed in SPL and sensation level (accounting for audibility) revealed that level-related changes in F0 and F1 magnitude were different for listeners with SNHL than for listeners with NH. Furthermore, the degradation in subcortical neural representation was observed to persist in listeners with SNHL even when the effects of age were controlled for. Conclusions: Overall, our results suggest a relatively greater degradation in the neural representation of TFS than of the periodicity envelope in individuals with SNHL. This degraded neural representation of TFS in SNHL, as reflected in the brainstem FFR, may reflect a disruption in the temporal pattern of phase-locked neural activity arising from altered tonotopic maps and/or wider filters causing poor frequency selectivity in these listeners. Finally, while preliminary results indicate that the deleterious effects of SNHL may be greater than age-related degradation in subcortical neural representation, the lack of a balanced age-matched control group in this study does not permit us to completely rule out the effects of age on subcortical neural representation.
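
Estimating phase-locked magnitude at F0 and F1 from an averaged FFR waveform amounts to reading FFT magnitudes at those frequencies. A minimal sketch with a synthetic waveform; the F0/F1 values and sampling parameters are illustrative assumptions, not the study's stimulus:

    import numpy as np

    fs = 8000                        # sampling rate (Hz)
    t = np.arange(0, 0.25, 1 / fs)   # 250 ms response window
    f0, f1 = 100, 300                # illustrative F0 and F1 (Hz)
    ffr = np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * f1 * t)

    spectrum = np.abs(np.fft.rfft(ffr)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    for target in (f0, f1):
        idx = np.argmin(np.abs(freqs - target))  # nearest FFT bin
        print(f"magnitude at {target} Hz: {spectrum[idx]:.3f}")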

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHYgU
via IFTTT

Exploring the Relationship Between Working Memory, Compressor Speed, and Background Noise Characteristics

Objectives: Previous work has shown that individuals with lower working memory demonstrate reduced intelligibility for speech processed with fast-acting compression amplification. This relationship has been noted in fluctuating noise, but the extent of noise modulation that must be present to elicit such an effect is unknown. This study expanded on previous work by exploring the effect of background noise modulations in relation to compression speed and working memory ability, using a range of signal-to-noise ratios. Design: Twenty-six older participants between the ages of 61 and 90 years were grouped by high or low working memory according to their performance on a reading span test. Speech intelligibility was measured for low-context sentences presented in background noise, where the noise varied in the extent of amplitude modulation. Simulated fast- or slow-acting compression amplification combined with individual frequency-gain shaping was applied to compensate for each individual’s hearing loss. Results: Better speech intelligibility scores were observed for participants with high working memory when fast compression was applied than when slow compression was applied. The low working memory group showed the opposite pattern, performing better under slow compression than under fast compression. There was also a significant effect of the extent of amplitude modulation in the background noise, such that the magnitude of the score difference (fast versus slow compression) depended on the number of talkers in the background noise. The presented signal-to-noise ratios did not significantly affect measured intelligibility performance. Conclusion: In agreement with earlier research, high working memory allowed better speech intelligibility when fast compression was applied in modulated background noise. In the present experiment, that effect was present regardless of the extent of background noise modulation.
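
The fast-versus-slow contrast manipulated here comes down to the compressor's release time: how quickly gain recovers after the level estimate falls. A minimal one-band sketch (not the study's hearing-aid simulation; the threshold, ratio, and time constants are illustrative):

    import numpy as np

    def compress(x, fs, threshold_db=-30.0, ratio=3.0, attack_s=0.005, release_s=0.5):
        """Simple feed-forward compressor with attack/release envelope smoothing."""
        a_att = np.exp(-1 / (attack_s * fs))
        a_rel = np.exp(-1 / (release_s * fs))
        env, y = 1e-9, np.empty_like(x)
        for i, s in enumerate(x):
            mag = abs(s)
            coeff = a_att if mag > env else a_rel
            env = coeff * env + (1 - coeff) * mag    # smoothed level estimate
            over = max(0.0, 20 * np.log10(env + 1e-12) - threshold_db)
            gain_db = -over * (1 - 1 / ratio)        # reduce gain above threshold
            y[i] = s * 10 ** (gain_db / 20)
        return y

    fs = 16000
    x = np.random.default_rng(4).normal(0, 0.1, fs)  # 1 s of noise-like input
    fast = compress(x, fs, release_s=0.04)           # fast-acting compression
    slow = compress(x, fs, release_s=0.5)            # slow-acting compression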

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHWpl
via IFTTT

Eliciting Cervical Vestibular-Evoked Myogenic Potentials by Bone-Conducted Vibration via Various Tapping Sites

Objectives: This study compared bone-conducted vibration (BCV) cervical vestibular-evoked myogenic potentials (cVEMPs) elicited by tapping at various skull sites in healthy subjects and patients with vestibular migraine (VM) to optimize stimulation conditions. Design: Twenty healthy subjects underwent a series of cVEMP tests with BCV tapping via a minishaker at the Fz (forehead), Cz (vertex), and inion (occiput) sites, in a randomized order of tapping sites. Another 20 VM patients were also enrolled in this study for comparison. Results: All 20 healthy subjects had clear BCV cVEMPs when tapping at the inion (100%) or Cz (100%), but not at the Fz (75%). Mean p13 and n23 latencies from Cz tapping were significantly longer than those from Fz tapping, but not longer than those from inion tapping. Unlike in healthy subjects, tapping at the Cz (95%) elicited a significantly higher response rate of present cVEMPs than tapping at the inion (78%) in the 20 VM patients (40 ears), because seven of nine VM ears with absent cVEMPs on inion tapping showed present cVEMPs on Cz tapping. Conclusions: While both inion and Cz tapping elicited a 100% response rate of cVEMPs in healthy individuals, Cz tapping had a higher response rate of cVEMPs than inion tapping in the VM group. In cases of total loss of saccular function, cVEMPs could not be activated by either inion or Cz tapping. However, if residual saccular function remains, Cz tapping may activate saccular afferents more efficiently than inion tapping.

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHY0A
via IFTTT

Effects of Reverberation and Compression on Consonant Identification in Individuals with Hearing Impairment

Objectives: Hearing aids are frequently used in reverberant environments; however, relatively little is known about how reverberation affects the processing of signals by modern hearing-aid algorithms. The purpose of this study was to investigate the acoustic and behavioral effects of reverberation and wide dynamic range compression (WDRC) in hearing aids on consonant identification for individuals with hearing impairment. Design: Twenty-three listeners with mild to moderate sloping sensorineural hearing loss were tested monaurally under varying degrees of reverberation and WDRC conditions. Listeners identified consonants embedded within vowel–consonant–vowel nonsense syllables. Stimuli were processed to simulate a range of realistic reverberation times and WDRC release times using virtual acoustic simulations. In addition, the effects of these processing conditions were analyzed acoustically using a model of envelope distortion to examine the effects on the temporal envelope. Results: Aided consonant identification decreased significantly as reverberation time increased. Consonant identification was also significantly affected by WDRC release time, such that individuals tended to perform significantly better with longer release times. There was no significant interaction between reverberation and WDRC. Applying the acoustic model to the processed signal showed a close relationship between trends in behavioral performance and distortion to the temporal envelope resulting from reverberation and WDRC; the model demonstrated the same trends found in the behavioral data for both factors. Conclusions: Reverberation and WDRC release time both affect aided consonant identification for individuals with hearing impairment, and these condition effects are associated with alterations to the temporal envelope. There was no significant interaction between reverberation and WDRC release time.
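
An envelope-distortion comparison of the general kind described (the study's specific model is not reproduced here) can be sketched by extracting Hilbert envelopes of a reference and a processed signal and computing a normalized difference; the signals below are synthetic:

    import numpy as np
    from scipy.signal import hilbert

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    reference = np.sin(2 * np.pi * 4 * t) * np.sin(2 * np.pi * 500 * t)  # 4 Hz envelope
    processed = 0.7 * reference + 0.05 * np.random.default_rng(5).normal(size=t.size)

    env_ref = np.abs(hilbert(reference))
    env_proc = np.abs(hilbert(processed))

    # Normalized mean absolute envelope difference (0 = identical envelopes).
    diff = np.mean(np.abs(env_ref / env_ref.mean() - env_proc / env_proc.mean())) / 2
    print(f"envelope difference ~= {diff:.3f}")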

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHY0o
via IFTTT

A Randomized Controlled Trial to Evaluate the Benefits of a Multimedia Educational Program for First-Time Hearing Aid Users

imageObjectives: The aims of this study were to (1) develop a series of short interactive videos (or reusable learning objects [RLOs]) covering a broad range of practical and psychosocial issues relevant to the auditory rehabilitation for first-time hearing aid users; (2) establish the accessibility, take-up, acceptability and adherence of the RLOs; and (3) assess the benefits and cost-effectiveness of the RLOs. Design: The study was a single-center, prospective, randomized controlled trial with two arms. The intervention group (RLO+, n = 103) received the RLOs plus standard clinical service including hearing aid(s) and counseling, and the waitlist control group (RLO−, n = 100) received standard clinical service only. The effectiveness of the RLOs was assessed 6-weeks posthearing aid fitting. Seven RLOs (total duration 1 hr) were developed using a participatory, community of practice approach involving hearing aid users and audiologists. RLOs included video clips, illustrations, animations, photos, sounds and testimonials, and all were subtitled. RLOs were delivered through DVD for TV (50.6%) and PC (15.2%), or via the internet (32.9%). Results: RLO take-up was 78%. Adherence overall was at least 67%, and 97% in those who attended the 6-week follow-up. Half the participants watched the RLOs two or more times, suggesting self-management of their hearing loss, hearing aids, and communication. The RLOs were rated as highly useful and the majority of participants agreed the RLOs were enjoyable, improved their confidence and were preferable to written information. Postfitting, there was no significant between-group difference in the primary outcome measure, overall hearing aid use. However, there was significantly greater hearing aid use in the RLO+ group for suboptimal users. Furthermore, the RLO+ group had significantly better knowledge of practical and psychosocial issues, and significantly better practical hearing aid skills than the RLO− group. Conclusions: The RLOs were shown to be beneficial to first-time hearing aid users across a range of quantitative and qualitative measures. This study provides evidence to suggest that the RLOs may provide valuable learning and educational support for first-time hearing aid users and could be used to supplement clinical rehabilitation practice.

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHVSm
via IFTTT

Factors Predicting Postoperative Unilateral and Bilateral Speech Recognition in Adult Cochlear Implant Recipients with Acoustic Hearing

imageObjectives: The first objective was to examine factors that could be predictive of postoperative unilateral (cochlear implant alone) speech recognition ability in a group of subjects with greater degrees of preoperative acoustic hearing than has been previously examined. Second, the study aimed to identify factors predictive of speech recognition in the best-aided, bilateral listening condition. Design: Participants were 65 postlinguistically hearing-impaired adults with preoperative phoneme in quiet scores of greater than or equal to 46% in one or both ears. Preoperative demographic and audiometric factors were assessed as predictors of 12-month postoperative unilateral and bilateral monosyllabic word scores in quiet and of bilateral speech reception threshold (SRT) in babble. Results: The predictive regression model accounted for 34.1% of the variance in unilateral word recognition scores in quiet. Factors that predicted better scores included: a shorter duration of severe to profound hearing loss in the implanted ear; and poorer pure-tone-averaged thresholds in the contralateral ear. Predictive regression models of postimplantation bilateral function accounted for 36.0% of the variance for word scores in quiet, and 30.9% of the variance for SRT in noise. A shorter duration of severe to profound hearing loss in the implanted ear, a lower age at the time of implantation, and better contralateral hearing thresholds were associated with higher bilateral word recognition in quiet and SRT in noise. Conclusions: In this group of cochlear implant recipients with preoperative acoustic hearing, a shorter duration of severe to profound hearing loss in the implanted ear was shown to be predictive of better unilateral and bilateral outcomes. However, further research is warranted to better understand the impact of that factor in a larger number of subjects with long-term hearing impairment of greater than 30 years. Better contralateral hearing was associated with poorer unilateral word scores with the implanted ear alone, but better absolute bilateral speech recognition. As a result, it is clear that different models would need to be developed to predict unilateral and bilateral postimplantation scores.

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHVBY
via IFTTT

Within- and Across-Subject Variability of Repeated Measurements of Medial Olivocochlear-Induced Changes in Transient-Evoked Otoacoustic Emissions

imageObjectives: Measurement of changes in transient-evoked otoacoustic emissions (TEOAEs) caused by activation of the medial olivocochlear reflex (MOCR) may have clinical applications, but the clinical utility is dependent in part on the amount of variability across repeated measurements. The purpose of this study was to investigate the within- and across-subject variability of these measurements in a research setting as a step toward determining the potential clinical feasibility of TEOAE-based MOCR measurements. Design: In 24 normal-hearing young adults, TEOAEs were elicited with 35 dB SL clicks and the MOCR was activated by 35 dB SL broadband noise presented contralaterally. Across a 5-week span, changes in both TEOAE amplitude and phase evoked by MOCR activation (MOC shifts) were measured at four sessions, each consisting of four independent measurements. Efforts were undertaken to reduce the effect of potential confounds, including slow drifts in TEOAE amplitude across time, activation of the middle-ear muscle reflex, and changes in subjects’ attentional states. MOC shifts were analyzed in seven 1/6-octave bands from 1 to 2 kHz. The variability of MOC shifts was analyzed at the frequency band yielding the largest and most stable MOC shift at the first session. Within-subject variability was quantified by the size of the standard deviations across all 16 measurements. Across-subject variability was quantified as the range of MOC shift values across subjects and was also described qualitatively through visual analyses of the data. Results: A large majority of MOC shifts in subjects were statistically significant. Most subjects showed stable MOC shifts across time, as evidenced by small standard deviations and by visual clustering of their data. However, some subjects showed within- and across-session variability that could not be explained by changes in hearing status, middle ear status, or attentional state. Simulations indicated that four baseline measurements were sufficient to predict the expected variability of subsequent measurements. However, the measured variability of subsequent MOC shifts in subjects was often larger than expected (based on the variability present at baseline), indicating the presence of additional variability at subsequent sessions. Conclusions: Results indicated that a wide range of within- and across-subject variability of MOC shifts was present in a group of young normal-hearing individuals. In some cases, very large changes in MOC shifts (e.g., 1.5 to 2 dB) would need to occur before one could attribute the change to either an intervention or pathology, rather than to measurement variability. It appears that MOC shifts, as analyzed in the present study, may be too variable for clinical use, at least in some individuals. Further study is needed to determine the extent to which changes in MOC shifts can be reliably measured across time for clinical purposes.

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHVBL
via IFTTT

Text as a Supplement to Speech in Young and Older Adults

imageObjective: The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, the authors tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults, with normal or impaired hearing. The working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from speechreading literature. We hypothesized that (1) combining auditory and visual text information will result in improved recognition accuracy compared with auditory or visual text information alone, (2) benefit from supplementing speech with visual text (auditory and visual enhancement) in young adults will be greater than that in older adults, and (3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design: Fifteen young adults with normal hearing, 15 older adults with normal hearing, and 15 older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance on auditory- and visual-text only conditions. Finally, the relationship between perceptual measures and other independent measures were examined using principal-component factor analyses, followed by regression analyses. Results: Both young and older adults performed similarly on 9 out of 10 perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor for a general speech-text integration ability. Conclusions: These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. These results also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills.

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHVln
via IFTTT

Auditory Lexical Decision and Repetition in Children: Effects of Acoustic and Lexical Constraints

Objectives: The objective of this study was to identify factors that may detract from children’s ability to identify words they do and do not know. The factors investigated were acoustic constraints, stemming from the presence of hearing loss (HL) or an acoustic competitor, and lexical constraints, due to an impoverished or cluttered vocabulary. Design: Eleven children with normal hearing (NH) and 11 children with bilateral, mild to moderately severe sensorineural HL were asked to categorize and repeat two-syllable real and nonsense words. Stimuli were amplified and frequency-shaped for each child with HL and presented randomly at a level consistent with average conversational speech (65 dB SPL). About half of the children in each group listened in quiet while the other half listened in multitalker babble. In addition to overall performance, responses were judged on the word category chosen by the child (real or nonsense), the category of the word produced by the child as judged by an examiner (real or nonsense), and the accuracy of the verbal response compared with the stimulus. From these judgments, 10 discrete types of errors were identified. Analyses were conducted for three different combinations of the 10 error categories to best characterize the effects of acoustic and lexical constraints. Results: Performance was highest for real words presented in quiet and poorest for nonsense words presented in multitalker babble. The performance of the children with HL was also poorer than that of the children with NH. Error analyses revealed strong effects of acoustic constraints on performance but few effects of lexical constraints. The two most frequent errors were the same for the children with NH and the children with HL: the misperception of nonsense words and the mistaking of nonsense words for real words. However, while both groups exhibited these errors in multitalker babble, the children with HL demonstrated them in quiet as well. Conclusions: These results suggest that children’s interactions with real and nonsense words are significantly constrained when the acoustic signal is degraded by HL and/or an acoustic competitor. The children’s tendency to repair unknown words into real words in the presence of acoustic interference may be beneficial when perceiving familiar speech, but could be detrimental if it causes them to miss opportunities to learn new words.
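
Each trial in this paradigm yields three judgments: the category the child chose, the category of what the child produced (as judged by the examiner), and repetition accuracy. The abstract does not enumerate the 10 error types, so the Python sketch below uses hypothetical labels purely to illustrate how those judgments combine:

def classify_trial(stimulus: str, chosen: str, produced: str, accurate: bool) -> str:
    # stimulus, chosen, produced: each "real" or "nonsense"
    if chosen == stimulus and accurate:
        return "correct"
    if stimulus == "nonsense" and chosen == "real" and produced == "real":
        return "nonsense repaired into a real word"  # lexical repair
    if chosen != stimulus:
        return "category error"
    return "misperception (right category, wrong form)"

# A child hears a nonsense word, calls it real, and says a real word:
print(classify_trial("nonsense", "real", "real", accurate=False))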

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHV4X
via IFTTT

Distribution Characteristics of Air-Bone Gaps: Evidence of Bias in Manual Audiometry

Objectives: Five databases were mined to examine the distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design: The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects across the five databases. Results: Automated audiometry produced air-bone gaps that were normally distributed, suggesting that air- and bone-conduction thresholds are themselves normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and showed evidence of biasing effects of assumptions about expected results. In one database, the form of the distributions showed evidence of the inclusion of conductive hearing losses. Conclusions: Thresholds obtained by manual audiometry show tester-bias effects arising from assumptions about the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and of the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, its distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps.
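
As a rough illustration of the distributional argument, a short Python sketch (with simulated thresholds, not the study's data) that forms air-bone gaps and tests them for normality:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
air = rng.normal(12.0, 7.0, size=5000)    # air-conduction thresholds, dB HL
bone = rng.normal(10.0, 7.0, size=5000)   # bone-conduction thresholds, dB HL

abg = air - bone                          # air-bone gap per threshold pair

stat, p = stats.normaltest(abg)           # D'Agostino-Pearson omnibus test
print(f"mean ABG {abg.mean():.1f} dB, skew {stats.skew(abg):.2f}, p = {p:.3f}")
# Tester bias would show up as artificially small variance and a non-normal shape.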

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHUht
via IFTTT

Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language

Objectives: To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants’ processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue; in hearing infants, it has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Design: Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age at implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed only in their stress pattern (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure of discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). Results: (1) Infants with CI discriminated between lexical stress patterns with only limited auditory experience with their implant device; (2) discrimination of stress patterns in infants with CI was reduced compared with that of infants with NH; (3) both groups showed a directional asymmetry in discrimination, that is, greater discrimination from the uncommon to the common stress pattern in Hebrew (/dóti/ versus /dotí/) than in the reversed condition. Conclusions: The CI device transmitted sufficient acoustic information (amplitude, duration, and fundamental frequency) to allow discrimination between stress patterns in young hearing-impaired infants with CI. The present pattern of results supports a discrimination model in which both auditory capabilities and “top–down” interactions are involved: the CI infants detected changes between stressed and unstressed syllables, after which they developed a bias for the more common weak–strong stress pattern in Hebrew. The latter suggests that infants with CI were able to extract the statistical distribution of stress patterns by listening to the ambient language, even after limited auditory experience with the CI device. To conclude, in processing lexical stress patterns, infants with CI followed similar developmental milestones as hearing infants, thus establishing important prerequisites for early language acquisition.
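
The discrimination measure itself is simple arithmetic on looking times; a minimal Python sketch with hypothetical values:

last_habituation = 4.2   # looking time (s) on the final habituation trial (/dóti/)
novel_trial = 7.9        # looking time (s) when the novel pattern (/dotí/) appears

recovery = novel_trial - last_habituation
print(f"looking-time recovery: {recovery:.1f} s -> "
      f"{'discrimination' if recovery > 0 else 'no discrimination'}")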

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHUhj
via IFTTT

Asymmetric Hearing Loss in Chinese Workers Exposed to Complex Noise

Objectives: To evaluate audiometric asymmetry in Chinese industrial workers and to investigate the effects of noise exposure, sex, and binaural average thresholds on audiometric asymmetry. Design: Data collected from Chinese industrial workers during a cross-sectional study were reanalyzed. Of the 1388 workers, 266 met the inclusion criteria for this study. Each subject underwent a physical examination and an otologic examination and completed a health-related questionnaire. χ² and t tests were used to examine the differences between the asymmetric and symmetric hearing loss groups. Results: One hundred thirty-one subjects (49.2%) had a binaural hearing threshold difference of 15 dB or more for at least one frequency, and there was no statistically significant difference between the left and right ears. The asymmetric hearing loss group was not exposed to higher cumulative noise levels (t = 0.522, p = 0.602), and there was no dose–response relation between asymmetry and cumulative noise levels (χ² = 6.502, p = 0.165). Men were 1.849 times more likely to have asymmetry than women (95% confidence interval, 1.051 to 3.253). Among workers with higher high-frequency hearing thresholds, audiometric asymmetry was 1.024 times more prevalent than among those with lower high-frequency hearing thresholds (95% confidence interval, 1.004 to 1.044). Conclusions: The results indicated that occupational noise exposure contributed minimally to asymmetry, whereas sex and binaural average thresholds significantly affected audiometric asymmetry. There was no evidence that the left ears were worse than the right ears.
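
The asymmetry criterion (a binaural threshold difference of 15 dB or more at one or more frequencies) and an odds ratio with its 95% confidence interval are both easy to compute; a Python sketch with hypothetical audiograms and counts (the log/Woolf CI method is an assumption, as the abstract does not state which was used):

import math

def is_asymmetric(left, right, criterion=15.0):
    # left/right: dicts mapping frequency (Hz) to threshold (dB HL)
    return any(abs(left[f] - right[f]) >= criterion for f in left)

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a, b = exposed with/without outcome; c, d = unexposed
    or_ = (a * d) / (b * c)
    half = z * math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - half), math.exp(math.log(or_) + half)

left = {500: 15, 1000: 20, 4000: 45}
right = {500: 15, 1000: 20, 4000: 25}
print(is_asymmetric(left, right))          # True: 20 dB difference at 4 kHz
print(odds_ratio_ci(80, 60, 51, 75))       # hypothetical male/female counts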

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHS9l
via IFTTT

Immediate and Short-Term Therapeutic Results Between Direction-Changing Positional Nystagmus with Short- and Long-Duration Groups

Objectives: Clinicians sometimes treat patients with relatively long-duration geotropic direction-changing positional nystagmus (DCPN) without latency. Recently, the concept of a “light cupula” in the lateral canal, which produces persistent geotropic DCPN, has been introduced. In the present study, we investigated the immediate and short-term therapeutic outcomes in long-duration DCPN. Design: The authors prospectively compared the therapeutic efficacy of a canalith-repositioning procedure (CRP) in short- and long-duration geotropic DCPN. Results: In patients with long-duration DCPN, the authors found no immediate therapeutic effect, and the proportion of patients showing short-term effects (on the next day) was much lower than among those with short-duration DCPN. In addition, no cases exhibited canal conversion after the CRP. Conclusion: Our results suggest that CRP is not useful in patients with long-duration geotropic DCPN and that the pathogenesis of long-duration geotropic DCPN may originate not from free-floating debris but from deflection of the cupula.

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHU0T
via IFTTT

Revealing Hearing Loss: A Survey of How People Verbally Disclose Their Hearing Loss

Objective: Hearing loss is the most common sensory deficit and congenital anomaly, yet the decision-making processes involved in disclosing hearing loss have been little studied. To address this issue, we explored the phrases that adults with hearing loss use to disclose their hearing loss. Design: Since self-disclosure research has not focused on hearing loss-specific issues, we created a 15-question survey about verbally disclosing hearing loss. English-speaking adults (>18 years old) with hearing loss of any etiology were recruited from otology clinics in a major referral hospital. Three hundred and thirty-seven participants completed the survey instrument. Participants’ phrase(s) used to tell people they have hearing loss were compared across objective characteristics (age; sex; type, degree, and laterality of hearing loss; word recognition scores) and self-reported characteristics (degree of hearing loss; age of onset and years lived with hearing loss; use of technology; hearing handicap score). Results: Participants’ responses revealed three strategies to address hearing loss: Multipurpose disclosure (phrases that disclose hearing loss and provide information to facilitate communication), Basic disclosure (phrases that disclose hearing loss through the term, a label, or details about the condition), or Nondisclosure (phrases that do not disclose hearing loss). Variables were compared between patients who did and did not use each disclosure strategy using χ² or Wilcoxon rank sum tests. Multipurpose disclosers were mostly female (p = 0.002); had experienced reactions of help, support, and accommodation after disclosing (p = 0.008); and had experienced reactions of being overly helpful after disclosing (p = 0.039). Basic disclosers were predominantly male (p = 0.004); reported feeling somewhat more comfortable disclosing their hearing loss over time (p = 0.009); had not experienced reactions of being treated unfairly or discriminated against (p = 0.021); and were more often diagnosed with mixed hearing loss (p = 0.004). Nondisclosers tended not to disclose in a group setting (p = 0.002) and were more often diagnosed with bilateral hearing loss (p = 0.005). In addition, all of the variables were examined to build logistic regression models to predict the use of each disclosure strategy. Conclusions: Our results reveal three simple strategies for verbally addressing hearing loss that can be used in a variety of contexts. We recommend educating people with hearing loss about these strategies; doing so could improve the experience of disclosing hearing loss and could educate society at large about how to interact with those who have a hearing loss.
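
A minimal Python sketch of the kind of logistic regression model described above, fit to synthetic data; the specific predictors, coding, and sample values here are illustrative assumptions, not the study's model:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 337
X = np.column_stack([
    rng.integers(0, 2, n),       # sex (0 = male, 1 = female)
    rng.normal(55, 15, n),       # age in years
    rng.normal(40, 20, n),       # hearing handicap score
])
y = (rng.random(n) < 0.3 + 0.2 * X[:, 0]).astype(int)  # 1 = used the strategy

model = LogisticRegression(max_iter=1000).fit(X, y)
print("odds ratios per predictor:", np.exp(model.coef_).round(2))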

from #Audiology via ola Kala on Inoreader http://ift.tt/1XNHS9b
via IFTTT

Interleaved Processors Improve Cochlear Implant Patients’ Spectral Resolution

Objective: Cochlear implant patients have difficulty in noisy environments, in part because of channel interaction. Interleaving the signal by sending every other channel to the opposite ear has the potential to reduce channel interaction by increasing the space between channels in each ear, while still providing the same total amount of spectral information when the two ears are combined. Although this method has been successful in other populations, such as hearing aid users, interleaving with cochlear implant patients has not yielded consistent benefits. This may be because perceptual misalignment between the two ears and the spacing between stimulation locations must be taken into account before interleaving. Design: Eight bilateral cochlear implant users were tested. After perceptually aligning the two ears, 12-channel maps were made that spanned the entire aligned portions of the arrays. Interleaved maps were created by removing every other channel from each ear. Participants’ spectral resolution and localization abilities were measured with perceptually aligned processing strategies both with and without interleaving. Results: There was a significant improvement in spectral resolution with interleaving. However, there was no significant effect of interleaving on localization abilities. Conclusions: The results indicate that interleaving can improve cochlear implant users’ spectral resolution. However, it may be necessary to perceptually align the two ears and/or use relatively large spacing between stimulation locations.
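
The channel-splitting step is straightforward; a minimal Python sketch of how interleaved maps are derived from a perceptually aligned 12-channel map (channel numbering is illustrative):

channels = list(range(1, 13))    # a perceptually aligned 12-channel map

left_map = channels[0::2]        # channels 1, 3, 5, 7, 9, 11
right_map = channels[1::2]       # channels 2, 4, 6, 8, 10, 12

print("left ear: ", left_map)    # spacing within each ear is doubled,
print("right ear:", right_map)   # yet all 12 channels survive binaurally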

from #Audiology via ola Kala on Inoreader http://ift.tt/1SVBLR1
via IFTTT

The Audiometric and Mechanical Effects of Partial Ossicular Discontinuity

Objectives: Ossicular discontinuity may be complete, with no contact between the disconnected ends, or partial, where normal contact at an ossicular joint or along a continuous bony segment of an ossicle is replaced by soft tissue or simply by contact of opposing bones. Complete ossicular discontinuity typically results in an audiometric pattern of a large, flat conductive hearing loss. In contrast, in cases where otomicroscopy reveals a normal external ear canal and tympanic membrane, high-frequency conductive hearing loss has been proposed as an indicator of partial ossicular discontinuity. Nevertheless, the diagnostic utility of high-frequency conductive hearing loss has been limited by gaps in previous research on the subject, and clinicians often assume that an audiogram showing high-frequency conductive hearing loss is flawed. This study aims to improve the diagnostic utility of high-frequency conductive hearing loss in cases of partial ossicular discontinuity by (1) making use of a control population against which to compare the audiometry of partial ossicular discontinuity patients and (2) examining the correlation between high-frequency conductive hearing loss and partial ossicular discontinuity under controlled experimental conditions in fresh cadaveric temporal bones. Furthermore, ear-canal measurements of umbo velocity and wideband acoustic immittance were investigated to determine their usefulness in diagnosing ossicular discontinuity. Design: The authors analyzed audiograms from 66 patients with either form of surgically confirmed ossicular discontinuity and no confounding pathologies. The authors also analyzed umbo velocity (n = 29) and power reflectance (n = 12) measurements from a subset of these patients. Finally, the authors performed experiments on six fresh temporal bone specimens to study the differing mechanical effects of complete and partial discontinuity. The mechanical effects of these lesions were assessed via laser Doppler measurements of stapes velocity. In a subset of the specimens (n = 4), wideband acoustic immittance measurements were also collected. Results: (1) Comparisons of the air–bone gap (ABG) at high and low frequencies showed that when high-frequency ABGs were larger than low-frequency ABGs, the surgeon usually reported soft-tissue bands at the point of discontinuity; in cases with larger low-frequency ABGs or flat ABGs across frequencies, both partial and complete discontinuities were reported. (2) Analysis of umbo velocity and power reflectance (calculated from wideband acoustic immittance) in patients revealed no significant difference across frequencies between the two types of ossicular discontinuity. (3) Temporal bone experiments revealed that partial discontinuity results in a greater loss in stapes velocity at high frequencies than at low frequencies, whereas with complete discontinuity, large losses in stapes velocity occur at all frequencies. Conclusion: The clinical and experimental findings suggest that when larger ABGs are encountered at high frequencies than at low frequencies, partial ossicular discontinuity should be considered in the differential diagnosis.
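
A minimal Python sketch of the high- versus low-frequency ABG comparison that the abstract associates with partial discontinuity. The audiogram values and the low/high frequency split (≤500 Hz versus ≥2 kHz) are assumptions for illustration:

air = {250: 30, 500: 30, 1000: 35, 2000: 55, 4000: 60}    # dB HL
bone = {250: 10, 500: 10, 1000: 15, 2000: 15, 4000: 15}

abg = {f: air[f] - bone[f] for f in air}
low = [abg[f] for f in abg if f <= 500]
high = [abg[f] for f in abg if f >= 2000]

mean_low, mean_high = sum(low) / len(low), sum(high) / len(high)
print(f"low-frequency ABG {mean_low:.0f} dB, high-frequency ABG {mean_high:.0f} dB")
if mean_high > mean_low:
    print("pattern consistent with partial ossicular discontinuity")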

from #Audiology via ola Kala on Inoreader http://ift.tt/1mYynGV
via IFTTT

Corneal-Reflection Eye-Tracking Technique for the Assessment of Horizontal Sound Localization Accuracy from 6 Months of Age

Objectives: The evaluation of sound localization accuracy (SLA) requires precise behavioral responses from the listener. Such responses are not always possible to elicit in infants and young children, and procedures for the assessment of SLA are time consuming. The aim of this study was to develop a fast, valid, and objective method for the assessment of SLA from 6 months of age. To this end, pupil positions toward spatially distributed, continuous auditory and visual stimuli were recorded. Design: Twelve children (29 to 157 weeks of age) who passed the universal newborn hearing screening and eight adults (18 to 40 years of age) with pure-tone thresholds ≤20 dB HL in both ears participated in this study. Horizontal SLA was measured in a sound field with 12 loudspeaker/display (LD) pairs placed in an audiological test room at 10-degree intervals in the frontal horizontal plane (±55 degrees azimuth). An ongoing auditory-visual stimulus was presented at 63 dB SPL(A) and shifted to randomized loudspeakers simultaneously with pauses of the visual stimulus. The visual stimulus was automatically reintroduced at the azimuth of the sounding loudspeaker after a sound-only period of 1.6 sec. A corneal-reflection eye-tracking technique allowed acquisition of the subjects’ pupil positions relative to the LD pairs. The perceived azimuth was defined as the median of the intersections between gaze and LD pairs during the final 500 msec of the sound-only period. Overall SLA was quantified by an Error Index (EI), where EI = 0 corresponded to a perfect match between perceived and presented azimuths, whereas EI = 1 corresponded to chance. Results: SLA was rapidly measured in children (mean = 168 sec, n = 12) and adults (mean = 162 sec, n = 8). Visual inspection of the gaze data indicated that gaze shifts occurred during sound-only periods. In children, the medians of the perceived sound-source azimuths either coincided with the presented sound-source azimuth or were offset by a maximum of 20 degrees. In contrast, adults revealed a perfect match from −55 to 55 degrees, except at 15 degrees azimuth (median = 20 degrees), with 9/12 of the quartile ranges = 0 degrees. Children showed a mean (SD) EI of 0.42 (0.17), which was significantly higher than that in adults (p
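
A minimal Python sketch of an Error Index of this kind. The abstract does not give the normalization formula, so the chance level below (the expected absolute error if gaze landed on a uniformly random LD pair) is an assumption, as are the trial values:

import numpy as np

azimuths = np.arange(-55, 56, 10)    # the 12 LD-pair positions, in degrees

def error_index(presented, perceived):
    err = np.abs(np.asarray(presented) - np.asarray(perceived)).mean()
    # expected error if gaze landed on a uniformly random LD pair:
    chance = np.abs(azimuths[:, None] - azimuths[None, :]).mean()
    return err / chance              # 0 = perfect match, ~1 = chance

presented = [-35, 5, 45, -15]        # shifted sound-source azimuths
perceived = [-35, 15, 45, -25]       # median gaze in final 500 ms of each trial
print(f"EI = {error_index(presented, perceived):.2f}")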

from #Audiology via ola Kala on Inoreader http://ift.tt/1mYyoKP
via IFTTT

The Relation Between Child Versus Parent Report of Chronic Fatigue and Language/Literacy Skills in School-Age Children with Cochlear Implants

Objectives: Preliminary evidence suggests that children with hearing loss experience elevated levels of chronic fatigue compared with children with normal hearing. Chronic fatigue is associated with decreased academic performance in many clinical populations. Children with cochlear implants as a group exhibit deficits in language and literacy skills; however, the relation between chronic fatigue and language and literacy skills in children with cochlear implants is unclear. The purpose of this study was to explore subjective ratings of chronic fatigue by children with cochlear implants and their parents, as well as the relation between chronic fatigue and language and literacy skills in this population. Design: Nineteen children with cochlear implants in grades 3 to 6 and one of their parents separately completed a subjective chronic fatigue scale, on which they rated how much the child had experienced physical, sleep/rest, and cognitive fatigue over the past month. In addition, the children completed an assessment battery that included measures of speech perception, oral language, word reading, and spelling. Results: Children and parents reported different levels of chronic child physical and sleep/rest fatigue; in both cases, parents reported significantly less fatigue than did children. Children and parents did not report different levels of chronic child cognitive fatigue. Child report of physical fatigue was related to speech perception, language, reading, and spelling. Child report of sleep/rest and cognitive fatigue was related to speech perception and language but not to reading or spelling. Parent report of child fatigue was not related to children’s language and literacy skills. Conclusions: Taken as a whole, the results suggest that parents underestimate the fatigue experienced by children with cochlear implants. Child report of physical fatigue was robustly related to language and literacy skills, and children with cochlear implants are likely more accurate at reporting physical fatigue than cognitive fatigue. Clinical practice should take fatigue into account when developing treatment plans for children with cochlear implants, and research should continue to develop a comprehensive model of fatigue in this population.
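
A minimal Python sketch of a paired child-versus-parent comparison of the kind reported above. The ratings are synthetic, the 0-to-100 scale direction (higher = less fatigue) is an assumption, and the abstract does not state which paired test was used; a Wilcoxon signed-rank test is shown as one reasonable choice:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical fatigue scores where higher = less fatigue, so parents
# reporting less child fatigue show up as higher parent scores.
child_physical = rng.normal(60, 12, size=19)
parent_physical = child_physical + rng.normal(8, 6, size=19)

stat, p = stats.wilcoxon(child_physical, parent_physical)
print(f"paired Wilcoxon signed-rank: W = {stat:.0f}, p = {p:.4f}")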

from #Audiology via ola Kala on Inoreader http://ift.tt/1mYyoKJ
via IFTTT

Insertion Depth in Cochlear Implantation and Outcome in Residual Hearing and Vestibular Function

Objectives: It has long been known that cochlear implantation may cause loss of residual hearing and vestibular function. Different insertion depths may cause varying degrees of intracochlear trauma in the apical region of the cochlea. The present study investigated the correlation between insertion depth and postoperative loss of residual hearing and vestibular function. Design: Thirty-nine adults underwent unilateral cochlear implantation. One group received a Med-El +Flex24 electrode array (24 mm; n = 4), one group received a Med-El +Flex28 electrode array (28 mm; n = 18), and one group received a Med-El +FlexSOFT electrode array (31.5 mm; n = 17). Residual hearing, cervical vestibular-evoked myogenic potentials, videonystagmography, and subjective visual vertical/horizontal were assessed before and after surgery. The electrode insertion depth and scalar position were examined with high-resolution rotational tomography after implantation in 29 subjects. Results: There was no observed relationship between the angular insertion depth (405° to 708°) and loss of low-frequency pure-tone average. Frequency-specific analysis revealed a weak relationship between the angular insertion depth and loss of hearing at 250 Hz (R² = 0.20; p = 0.02). There was no statistically significant difference in residual hearing or vestibular function between the +Flex28 and +FlexSOFT electrode arrays. Eight percent of the cases had vertigo after surgery. The electrode arrays were positioned inside the scala tympani, and not the scala vestibuli, in all subjects. In 18% of the cases, the +FlexSOFT electrode array was not fully inserted. Conclusions: The final outcome in residual hearing correlates only weakly with the angular insertion depth for depths above 405°. Postoperative loss of vestibular function did not correlate with the angular insertion depth or age at implantation. The surgical protocol used in this study appears to minimize the risk of postoperative vertigo symptoms.
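
A minimal Python sketch of the frequency-specific regression described above, using simulated data rather than the study's measurements; R² near 0.20 would mean insertion depth explains only about a fifth of the variance in 250 Hz hearing loss:

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
depth = rng.uniform(405, 708, size=29)                     # angular depth, degrees
loss_250 = 0.03 * (depth - 405) + rng.normal(10, 6, 29)    # threshold shift, dB

fit = stats.linregress(depth, loss_250)
print(f"slope {fit.slope:.3f} dB/degree, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")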

from #Audiology via ola Kala on Inoreader http://ift.tt/1mYyn9M
via IFTTT
