Wednesday, December 16, 2015

Clinical Validity of hearScreen™ Smartphone Hearing Screening for School Children

Objectives: The study aimed to determine the validity of a smartphone hearing screening technology (hearScreen™) compared with conventional screening audiometry in terms of (1) sensitivity and specificity, (2) referral rate, and (3) test time. Design: One thousand and seventy school-age children in grades 1 to 3 (mean age 8 ± 1.1 years) were recruited from five public schools. Children were screened twice, once using conventional audiometry and once with the smartphone hearing screening. Screening was conducted in a counterbalanced sequence, alternating the initial screen between conventional and smartphone screening. Results: No statistically significant difference in performance between the techniques was noted, with smartphone screening demonstrating sensitivity (75.0%) and specificity (98.5%) equivalent to conventional screening audiometry. Although the referral rate was lower with smartphone screening (3.2 vs. 4.6%), the difference was not significant (p > 0.05). Smartphone screening (hearScreen™) was 12.3% faster than conventional screening. Conclusion: Smartphone hearing screening using the hearScreen™ application is accurate and time efficient.
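The headline numbers above follow directly from the 2x2 table of screening outcomes against the reference standard. As a minimal sketch (the data layout and field names are assumptions for illustration, not the study's records):

```python
# Compute sensitivity, specificity, and referral rate from per-child
# screening outcomes; "gold_standard_fail" stands in for whatever
# reference result defined a true hearing-loss case in the study.
def screening_metrics(results):
    """results: list of (screen_refer: bool, gold_standard_fail: bool) pairs."""
    tp = sum(1 for refer, fail in results if refer and fail)
    fn = sum(1 for refer, fail in results if not refer and fail)
    tn = sum(1 for refer, fail in results if not refer and not fail)
    fp = sum(1 for refer, fail in results if refer and not fail)
    sensitivity = tp / (tp + fn)           # failing children correctly referred
    specificity = tn / (tn + fp)           # passing children correctly cleared
    referral_rate = (tp + fp) / len(results)
    return sensitivity, specificity, referral_rate
```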

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8Bj3s
via IFTTT

Audiometric Characteristics of a Dutch DFNA10 Family With Mid-Frequency Hearing Impairment

Objectives: Mutations in EYA4 can cause nonsyndromic autosomal dominant sensorineural hearing impairment (DFNA10) or a syndromic variant with hearing impairment and dilated cardiomyopathy. A mutation in EYA4 was found in a Dutch family, causing DFNA10. This study focuses on characterizing the hearing impairment in this family. Design: Whole exome sequencing was performed in the proband. In addition, peripheral blood samples were collected from 23 family members, and segregation analyses were performed. All participants underwent otorhinolaryngological examinations and pure-tone audiometry, and 12 participants underwent speech audiometry. In addition, an extended set of audiometric measurements was performed in five family members to evaluate the functional status of the cochlea. Vestibular testing was performed in three family members. Two individuals underwent echocardiography to evaluate the nonsyndromic phenotype. Results: The authors present a Dutch family with a truncating mutation in EYA4 causing a mid-frequency hearing impairment. This mutation (c.464del) leads to a frameshift and a premature stop codon (p.Pro155fsX) and is the most N-terminal mutation in EYA4 found to date. In addition, a missense mutation, predicted to be deleterious, was found in EYA4 in two family members. Echocardiography in two family members revealed no signs of dilated cardiomyopathy. Results of caloric and velocity step tests in three family members showed no abnormalities. Hearing impairment was found to be symmetric and progressive, beginning as a mid-frequency hearing impairment in childhood and developing into a high-frequency, moderate hearing impairment later in life. The extended audiometric measurements in five family members yielded results comparable to those obtained in patients with other sensory types of hearing impairment, such as Usher syndrome type IIA and presbyacusis, and not to those obtained in patients with (cochlear) conductive types of hearing impairment, such as DFNA8/12 and DFNA13. Conclusions: The mid-frequency hearing impairment in the present family was found to be symmetric and progressive, with a predominantly childhood onset. The results of psychophysical measurements revealed similarities to other conditions involving a sensory type of hearing impairment, such as Usher syndrome type IIA and presbyacusis. The study results suggest that EYA4 is expressed in the sensory cells of the cochlea. This phenotypic description will facilitate counseling for hearing impairment in DFNA10 patients.

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8Bhc1
via IFTTT

Word Recognition Variability With Cochlear Implants: “Perceptual Attention” Versus “Auditory Sensitivity”

Objectives: Cochlear implantation does not automatically result in robust spoken language understanding for postlingually deafened adults. Enormous outcome variability exists, related to the complexity of understanding spoken language through cochlear implants (CIs), which deliver degraded speech representations. This investigation examined variability in word recognition as explained by “perceptual attention” and “auditory sensitivity” to the acoustic cues underlying speech perception. Design: Thirty postlingually deafened adults with CIs and 20 age-matched controls with normal hearing (NH) were tested. Participants underwent assessment of word recognition in quiet and of perceptual attention (cue-weighting strategies), based on labeling tasks for two phonemic contrasts: (1) “cop”–“cob,” based on a duration cue (easily accessible through CIs) or a dynamic spectral cue (less accessible through CIs), and (2) “sa”–“sha,” based on static or dynamic spectral cues (both potentially poorly accessible through CIs). Participants were also assessed for auditory sensitivity to the speech cues underlying those labeling decisions. Results: Word recognition varied widely among CI users (20 to 96%) and was generally poorer than for NH participants. Implant users and NH controls showed similar perceptual attention and auditory sensitivity to the duration cue, while CI users showed poorer attention and sensitivity to all spectral cues. Both attention and sensitivity to spectral cues predicted variability in word recognition. Conclusions: For CI users, both perceptual attention and auditory sensitivity are important in word recognition. Efforts should be made to better represent spectral cues through implants, while also facilitating attention to these cues through auditory training.
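Cue-weighting strategies of this kind are commonly estimated by regressing listeners' binary labels on the trial-by-trial cue values; the coefficient on each cue indexes how heavily it is weighted. A hedged sketch under that assumption (the study's exact analysis may differ, and the data and variable names below are synthetic):

```python
# Estimate perceptual cue weights for a "cop"-"cob" labeling task via
# logistic regression on z-scored cue values. A listener weighting
# duration heavily, as CI users tend to, shows |w_dur| >> |w_spec|.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
duration_cue = rng.normal(size=200)     # z-scored vowel-duration values
spectral_cue = rng.normal(size=200)     # z-scored dynamic spectral values
# Simulated responses: 1 = "cop", driven mostly by the duration cue.
labels = (0.9 * duration_cue + 0.3 * spectral_cue
          + rng.normal(scale=0.5, size=200)) > 0

X = np.column_stack([duration_cue, spectral_cue])
w_dur, w_spec = LogisticRegression().fit(X, labels).coef_[0]
total = abs(w_dur) + abs(w_spec)
print(f"duration weight {w_dur / total:.2f}, spectral weight {w_spec / total:.2f}")
```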

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eBF6
via IFTTT

The Importance of Acoustic Temporal Fine Structure Cues in Different Spectral Regions for Mandarin Sentence Recognition

Objectives: To study the relative contribution of acoustic temporal fine structure (TFS) cues in low-, mid-, and high-frequency regions to Mandarin sentence recognition. Design: Twenty-one subjects with normal hearing took part in a study of Mandarin sentence recognition using acoustic TFS. The acoustic TFS information was extracted from ten 3-equivalent-rectangular-bandwidth-wide bands spanning 80 to 8858 Hz using the Hilbert transform and was assigned to low-, mid-, and high-frequency regions. Percent-correct recognition scores were obtained with acoustic TFS information presented using one, two, or three frequency regions. The relative weights of the three frequency regions were calculated using the least-squares approach. Results: Mean percent-correct scores for sentence recognition using acoustic TFS were nearly perfect for stimuli with all three frequency regions together. Recognition was approximately 50 to 60% correct with only the low- or mid-frequency region but decreased to approximately 5% correct with only the high-frequency region of acoustic TFS. The mean weights of the low-, mid-, and high-frequency regions were 0.39, 0.48, and 0.13, respectively, and the difference between each pair of frequency regions was statistically significant. Conclusion: The acoustic TFS cues in low- and mid-frequency regions convey greater information for Mandarin sentence recognition, whereas those in the high-frequency region contribute little.
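The TFS-extraction step described here (band-pass filtering followed by the Hilbert transform, keeping the phase and discarding the envelope) can be sketched as follows; the band edges and filter order are illustrative, not the study's exact 3-ERB analysis bands:

```python
# Extract the temporal fine structure of one analysis band: band-pass
# filter, take the analytic signal, then keep cos(phase) so the band
# has unit envelope and only fine-structure information remains.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def extract_tfs(x, fs, lo_hz, hi_hz):
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)
    analytic = hilbert(band)
    return np.cos(np.angle(analytic))   # unit-envelope fine structure

fs = 22050
x = np.random.randn(fs)                 # stand-in for 1 s of speech
low_band_tfs = extract_tfs(x, fs, 80, 500)   # e.g., one low-frequency band
```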

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eBF4
via IFTTT

Evaluation of a Modified User Guide for Hearing Aid Management

Objectives: This study investigated whether a hearing aid user guide modified using best practice principles for health literacy resulted in superior ability to perform hearing aid management tasks, compared with the user guide in its original form. Design: This research used a two-arm study design to compare the original manufacturer's user guide with a modified user guide for the same hearing aid, an Oticon Acto behind-the-ear aid with an open dome. The modified user guide had a lower reading grade level (4.2 versus 10.5), used a larger font size, included more graphics, and had less technical information. Eighty-nine adults aged 55 years and over were included in the study; none had experience with hearing aid use or management. Participants were randomly assigned either the modified guide (n = 47) or the original guide (n = 42). All participants were administered the Hearing Aid Management test, designed for this study, which assessed their ability to perform seven management tasks (e.g., change the battery) with their assigned user guide. Results: The regression analysis indicated that the type of user guide was significantly associated with performance on the Hearing Aid Management test, adjusting for 11 potential covariates. In addition, participants assigned the modified guide required significantly fewer prompts to perform tasks and were significantly more likely to perform four of the seven tasks without the need for prompts. The median time taken by those assigned the modified guide was also significantly shorter for three of the tasks. Other variables associated with performance on the Hearing Aid Management test were health literacy level, finger dexterity, and age. Conclusions: The findings indicate the need to design hearing aid user guides in line with best practice principles of health literacy as a means of facilitating improved hearing aid management in older adults.
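The reading-grade-level comparison (4.2 versus 10.5) presumably comes from a standard readability formula; the abstract does not say which one, so treat the choice below as an assumption. Here is a sketch using the Flesch-Kincaid grade formula with a crude vowel-group syllable counter (a real analysis would use a proper readability tool):

```python
# Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
# Syllables are approximated by counting vowel groups, which over- or
# under-counts some words but is adequate for a rough comparison.
import re

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

print(fk_grade("Open the battery door. Insert the battery flat side up."))
```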

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eBoM
via IFTTT

Acoustic Immittance Measures: Basic and Advanced Practice

No abstract available.

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eCsT
via IFTTT

Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder

Objectives: Children with auditory processing disorder (APD) typically present with “listening difficulties,” including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. Design: This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The children with APD were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test, which assesses sentence perception in various configurations of masking speech and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use, completed by parents. Results: All outcome measures improved significantly immediately postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcomes. Improvements in speech-in-noise performance were sustained 3 months postintervention. Conclusions: Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8BgEO
via IFTTT

The Effect of Functional Hearing and Hearing Aid Usage on Verbal Reasoning in a Large Community-Dwelling Population

Objectives: Verbal reasoning performance is an indicator of the ability to think constructively in everyday life and relies on both crystallized and fluid intelligence. This study aimed to determine the effect of functional hearing on verbal reasoning when controlling for age, gender, and education. In addition, the study investigated whether hearing aid usage mitigated the effect and examined different routes from hearing to verbal reasoning. Design: Cross-sectional data on 40- to 70-year-old community-dwelling participants from the UK Biobank resource were accessed. Data consisted of behavioral and subjective measures of functional hearing, assessments of numerical and linguistic verbal reasoning, measures of executive function, and demographic and lifestyle information. Data on 119,093 participants who had completed hearing and verbal reasoning tests were submitted to multiple regression analyses, and data on 61,688 of these participants, who had completed additional cognitive tests and provided relevant lifestyle information, were submitted to structural equation modeling. Results: Poorer performance on the behavioral measure of functional hearing was significantly associated with poorer verbal reasoning in both the numerical and linguistic domains (p

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8Big5
via IFTTT

Masking Period Patterns and Forward Masking for Speech-Shaped Noise: Age-Related Effects

Objective: The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to nonsimultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Design: Participants included younger (n = 11), middle-aged (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions and assessed how well the temporal window fits accounted for these data. Results: The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. Conclusions: This study demonstrated an age-related increase in susceptibility to nonsimultaneous masking, supporting the hypothesis that exacerbated nonsimultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data, suggesting an association between susceptibility to forward masking and speech understanding in modulated noise.
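One common way to model the temporal windows derived here is an asymmetric exponential with separate backward and forward time constants; making the constants more alike yields the “more symmetric” window attributed to the older listeners. The functional form and the time constants below are illustrative assumptions, not the study's fitted values:

```python
# A two-sided exponential temporal window: intensity weighting applied
# to sound at time t (ms) relative to the window center. t < 0 is the
# backward-masking side; t > 0 is the forward-masking side.
import numpy as np

def temporal_window(t_ms, tau_back=4.0, tau_fwd=30.0):
    t = np.asarray(t_ms, dtype=float)
    return np.where(t < 0, np.exp(t / tau_back), np.exp(-t / tau_fwd))

t = np.linspace(-50, 100, 301)
younger = temporal_window(t, tau_back=4.0, tau_fwd=30.0)   # sharply asymmetric
older = temporal_window(t, tau_back=15.0, tau_fwd=30.0)    # more symmetric
```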

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8Bg7X
via IFTTT

The Effects of Noise and Reverberation on Listening Effort in Adults With Normal Hearing

Objectives: The purpose of this study was to investigate the effects of background noise and reverberation on listening effort. Four specific research questions related to listening effort were addressed: (A) With comparable word recognition performance across levels of reverberation, what are the effects of noise and reverberation on listening effort? (B) What is the effect of background noise when reverberation time is constant? (C) What is the effect of increasing reverberation from low to moderate when the signal-to-noise ratio is constant? (D) What is the effect of increasing reverberation from moderate to high when the signal-to-noise ratio is constant? Design: Eighteen young adults (mean age 24.8 years) with normal hearing participated. A dual-task paradigm was used to simultaneously assess word recognition and listening effort. The primary task was monosyllabic word recognition, and the secondary task was word categorization (pressing a button if the word heard was judged to be a noun). Participants were tested in quiet and in background noise at three levels of reverberation (T30
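Dual-task paradigms like this are typically scored by the decrement in secondary-task performance relative to a single-task baseline. A minimal sketch of that convention follows; the study's exact effort metric is not specified in the abstract, so this is an assumption:

```python
# Listening effort indexed as proportional slowing of the secondary
# (categorization) task under dual-task load: more effort spent on the
# primary word-recognition task leaves less capacity for the button press.
def dual_task_cost(rt_single_ms, rt_dual_ms):
    return (rt_dual_ms - rt_single_ms) / rt_single_ms

effort_quiet = dual_task_cost(520.0, 580.0)   # illustrative values
effort_noise = dual_task_cost(520.0, 710.0)   # larger cost = more effort
```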

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8Bg7V
via IFTTT

Development of Open-Set Word Recognition in Children: Speech-Shaped Noise and Two-Talker Speech Maskers

Objective: The goal of this study was to establish the developmental trajectories for children's open-set recognition of monosyllabic words in each of two maskers: two-talker speech and speech-shaped noise. Design: Listeners were 56 children (5 to 16 years) and 16 adults, all with normal hearing. Thresholds for 50% correct recognition of monosyllabic words were measured in a two-talker speech or a speech-shaped noise masker in the sound field using an open-set task. Target words were presented at a fixed level of 65 dB SPL throughout testing, while the masker level was adapted. A repeated-measures design was used to compare the performance of three age groups of children (5 to 7 years, 8 to 12 years, and 13 to 16 years) and a group of adults. The pattern of age-related changes during childhood was also compared between the two masker conditions. Results: Listeners in all four age groups performed more poorly in the two-talker speech than in the speech-shaped noise masker, but the developmental trajectories differed for the two masker conditions. For the speech-shaped noise masker, children's performance improved with age until about 10 years of age, with few systematic child–adult differences thereafter. In contrast, for the two-talker speech masker, children's thresholds gradually improved between 5 and 13 years of age, followed by an abrupt improvement in performance to adult-like levels. Children's thresholds in the two masker conditions were uncorrelated. Conclusions: Younger children require a more advantageous signal-to-noise ratio than older children and adults to achieve 50% correct word recognition in both masker conditions. However, children's ability to recognize words appears to take longer to mature and follows a different developmental trajectory for the two-talker speech masker than for the speech-shaped noise masker. These findings highlight the importance of considering both age and masker type when evaluating children's masked speech perception abilities.
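Adapting the masker level to find 50% correct recognition is usually done with a simple one-up/one-down staircase: the masker gets louder after a correct response and quieter after an error. A hedged sketch of such a track, with step size, starting level, and stopping rule chosen for illustration rather than taken from the study:

```python
# One-up/one-down adaptive track on masker level. The target word stays
# at 65 dB SPL; the track converges on the masker level giving 50%
# correct, from which the threshold SNR is computed.
def run_track(respond, start_masker_db=55.0, step_db=4.0, n_reversals=8):
    """respond(masker_db) -> True if the listener repeated the word correctly."""
    level, direction, reversals = start_masker_db, 0, []
    while len(reversals) < n_reversals:
        new_direction = +1 if respond(level) else -1  # louder masker if correct
        if direction != 0 and new_direction != direction:
            reversals.append(level)                   # track changed direction
        direction = new_direction
        level += new_direction * step_db
    masker_at_threshold = sum(reversals[-6:]) / 6     # mean of last 6 reversals
    return 65.0 - masker_at_threshold                 # SNR at 50% correct
```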

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8Bg7K
via IFTTT

Nonmuscle Myosin Heavy Chain IIA Mutation Predicts Severity and Progression of Sensorineural Hearing Loss in Patients With MYH9-Related Disease

Objectives: MYH9-related disease (MYH9-RD) is an autosomal-dominant disorder deriving from mutations in MYH9, the gene for the nonmuscle myosin heavy chain (NMMHC)-IIA. MYH9-RD has a complex phenotype including congenital features, such as thrombocytopenia, and noncongenital manifestations, namely sensorineural hearing loss (SNHL), nephropathy, cataract, and liver abnormalities. The disease is caused by a limited number of mutations affecting different regions of the NMMHC-IIA protein. SNHL is the most frequent noncongenital manifestation of MYH9-RD. However, only scarce and anecdotal information is currently available about the clinical and audiometric features of SNHL in MYH9-RD subjects. The objective of this study was to investigate the severity and propensity for progression of SNHL in a large series of MYH9-RD patients in relation to the causative NMMHC-IIA mutations. Design: This study included consecutive patients diagnosed with MYH9-RD between July 2007 and March 2012 at four participating institutions. A total of 115 audiograms were analyzed from 63 patients belonging to 45 unrelated families with different NMMHC-IIA mutations. Cross-sectional analyses of audiograms were performed. Regression analysis was performed, and age-related typical audiograms (ARTAs) were derived to characterize the type of SNHL associated with different mutations. Results: Severity of SNHL appeared to depend on the specific NMMHC-IIA mutation. Patients carrying substitutions at residue R702, located in the short functional SH1 helix, had the most severe degree of SNHL, whereas patients with the p.E1841K substitution in the coiled-coil region or mutations at the nonhelical tailpiece presented a mild degree of SNHL even at advanced age. The authors also documented the effects of different amino acid changes at the same residue: for instance, individuals with the p.R702C mutation had more severe SNHL than those with the p.R702H mutation, and the p.R1165L substitution was associated with a higher degree of hearing loss than p.R1165C. In general, mild SNHL was associated with a fairly flat audiogram configuration, whereas severe SNHL correlated with downsloping configurations. ARTA plots showed that the most progressive type of SNHL was associated with the p.R702C, p.R702H, and p.R1165L substitutions, whereas the p.R1165C mutation correlated with a milder, nonprogressive type of SNHL than p.R1165L. The ARTA for the p.E1841K mutation demonstrated a mild degree of SNHL with only mild progression, whereas the ARTA for the mutations at the nonhelical tailpiece did not show any substantial progression. Conclusions: These data provide useful tools to predict the progression and the expected degree of severity of SNHL in individual MYH9-RD patients, which is especially relevant in young patients. The consequences for clinical practice are important not only for appropriate patient counseling but also for the development of customized, genotype-driven clinical management. The authors recently reported that cochlear implantation has a good outcome in MYH9-RD patients; thus, stricter follow-up and earlier intervention are recommended for patients with unfavorable genotypes.
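ARTAs of this kind are typically built by regressing threshold on age within each mutation group, separately per audiometric frequency, and reading off the fitted thresholds at decade ages. A minimal sketch under that assumption, with synthetic placeholder data rather than the study's audiograms:

```python
# Derive an age-related typical audiogram (ARTA) for one genotype group:
# fit threshold = intercept + slope * age at each frequency, then
# evaluate the fit at decade steps to get the "typical" audiograms.
import numpy as np

def arta(ages, thresholds_by_freq, decades=range(10, 80, 10)):
    """thresholds_by_freq: {freq_hz: per-subject thresholds in dB HL}."""
    out = {}
    for freq, thr in thresholds_by_freq.items():
        slope, intercept = np.polyfit(ages, thr, 1)   # dB HL per year
        out[freq] = {d: intercept + slope * d for d in decades}
    return out

ages = np.array([12, 25, 33, 41, 56, 63])
thresholds = {1000: np.array([20, 28, 35, 42, 55, 60])}  # synthetic values
print(arta(ages, thresholds))
```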

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9ezgB
via IFTTT

Finite Verb Morphology in the Spontaneous Speech of Dutch-Speaking Children With Hearing Loss

Objective: In this study, the acquisition of Dutch finite verb morphology was investigated in children with cochlear implants (CIs) with profound hearing loss and in children with hearing aids (HAs) with moderate to severe hearing loss. Comparing these two groups of children increases our insight into how hearing experience and audibility affect the acquisition of morphosyntax. Design: Spontaneous speech samples of 48 children with CIs and 29 children with HAs, ages 4 to 7 years, were analyzed. These language samples were analyzed by means of standardized language analysis involving mean length of utterance, the number of finite verbs produced, and target-like subject–verb agreement. The outcomes were interpreted relative to expectations based on the performance of typically developing peers with normal hearing. Outcomes of all measures were correlated with hearing level in the group of HA users and with age at implantation in the group of CI users. Results: For both groups, the number of finite verbs produced in a 50-utterance sample was on par with mean length of utterance and at the lower bound of the normal distribution. No significant differences were found between children with CIs and HAs on any of the measures under investigation. Yet, both groups produced more subject–verb agreement errors than is to be expected for typically developing hearing peers. No significant correlation was found between the hearing level of the children and the relevant measures of verb morphology, with respect to both the overall number of verbs used and the number of errors that children made. Within the group of CI users, the outcomes were significantly correlated with age at implantation. Conclusion: When producing finite verb morphology, profoundly deaf children wearing CIs perform similarly to their peers with moderate-to-severe hearing loss wearing HAs. Hearing loss negatively affects the acquisition of subject–verb agreement regardless of the hearing device (CI or HA) that the child is wearing. The results are of importance for speech-language pathologists working with children with hearing impairment, indicating the need to focus on subject–verb agreement in speech-language therapy.
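The core sample measures named here, mean length of utterance and a finite-verb count over a 50-utterance sample, are straightforward to compute once utterances are transcribed and coded. A simplified sketch (MLU in words rather than morphemes, and a placeholder finite-verb tag, both assumptions for illustration):

```python
# Compute mean length of utterance and count finite verbs in a coded
# spontaneous-speech sample. Real clinical analyses count morphemes and
# use a full morphosyntactic coding scheme; "VF" is a hypothetical tag.
def mlu(utterances):
    """utterances: list of utterance strings; returns mean words per utterance."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def count_finite_verbs(tagged_utterances):
    """tagged_utterances: lists of (word, tag) pairs, 'VF' = finite verb."""
    return sum(tag == "VF" for u in tagged_utterances for _, tag in u)

sample = ["de hond loopt", "ik wil koekje"]          # illustrative utterances
print(mlu(sample))
```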

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eARy
via IFTTT

Subjective Ratings of Fatigue and Vigor in Adults With Hearing Loss Are Driven by Perceived Hearing Difficulties Not Degree of Hearing Loss

Objectives: Anecdotal reports and qualitative research suggest that fatigue is a common, but often overlooked, accompaniment of hearing loss that negatively affects quality of life. However, systematic research examining the relationship between hearing loss and fatigue is limited. In this study, the authors examined relationships between hearing loss and various domains of fatigue and vigor using standardized and validated measures. Relationships between subjective ratings of multidimensional fatigue and vigor and the social and emotional consequences of hearing loss were also explored. Design: Subjective ratings of fatigue and vigor were assessed using the Profile of Mood States and the Multidimensional Fatigue Symptom Inventory-Short Form. To assess the social and emotional impact of hearing loss, participants also completed, depending on their age, the Hearing Handicap Inventory for the Elderly or for Adults. Responses were obtained from 149 adults (mean age = 66.1 years, range 22 to 94 years) who had scheduled a hearing test and/or a hearing aid selection at the Vanderbilt Bill Wilkerson Center audiology clinic. These data were used to explore relationships between audiometric and demographic (i.e., age and gender) factors, fatigue, and hearing handicap scores. Results: Compared with normative data, adults seeking help for their hearing difficulties in this study reported significantly less vigor and more fatigue. Reports of severe vigor/fatigue problems (ratings exceeding normative means by ±1.5 standard deviations) were also more frequent in the study sample than in normative data. Regression analyses, with adjustments for age and gender, revealed that subjective percepts of fatigue, regardless of domain, and of vigor were not strongly associated with degree of hearing loss. However, similar analyses controlling for age, gender, and degree of hearing loss showed a strong association between measures of fatigue and vigor (Multidimensional Fatigue Symptom Inventory-Short Form scores) and the social and emotional consequences of hearing loss (Hearing Handicap Inventory for the Elderly/Adults scores). Conclusions: Adults seeking help for hearing difficulties are more likely to experience severe fatigue and vigor problems; surprisingly, this increased risk appears unrelated to degree of hearing loss. However, the negative psychosocial consequences of hearing loss are strongly associated with subjective ratings of fatigue, across all domains, and of vigor. Additional research is needed to define the pathogenesis of hearing loss-related fatigue and to identify factors that may modulate and mediate (e.g., hearing aid or cochlear implant use) its impact.

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eABe
via IFTTT

Predicting Speech-in-Noise Recognition From Performance on the Trail Making Test: Results From a Large-Scale Internet Study

Objective: The aim of the study was to investigate the utility of an internet-based version of the trail making test (TMT) to predict performance on a speech-in-noise perception task. Design: Data were taken from a sample of 1509 listeners between 18 and 91 years of age. Participants completed computerized versions of the TMT and an adaptive speech-in-noise recognition test. All testing was conducted via the internet. Results: The results indicate that better performance on both the simple and complex subtests of the TMT is associated with better speech-in-noise recognition scores. Thirty-eight percent of the participants had scores on the speech-in-noise test that indicated the presence of a hearing loss. Conclusions: The findings suggest that the TMT may be a useful tool in the assessment, and possibly the treatment, of speech-recognition difficulties. The results indicate that the relation between speech-in-noise recognition and TMT performance reflects both the capacity of the TMT to index processing speed and the more complex cognitive abilities also implicated in TMT performance.
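The core analysis here relates TMT completion times to speech-in-noise thresholds while accounting for demographics. A sketch of that style of regression, using entirely synthetic stand-in data (the study's dataset is not reproduced here) and plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the study variables:
age = rng.uniform(18, 91, n)                     # years
tmt_time = 20 + 0.5 * age + rng.normal(0, 8, n)  # TMT completion time (s)
srt = -6 + 0.05 * tmt_time + 0.03 * age + rng.normal(0, 1.5, n)  # dB SNR

# OLS: SRT ~ intercept + TMT time + age (lower SRT = better recognition)
X = np.column_stack([np.ones(n), tmt_time, age])
beta, *_ = np.linalg.lstsq(X, srt, rcond=None)
for name, b in zip(["intercept", "tmt_time", "age"], beta):
    print(f"{name}: {b:.3f}")
```

A positive coefficient on TMT time in this toy model corresponds to the reported pattern: slower trail making predicts poorer (higher) speech-in-noise thresholds.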

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9ez0a
via IFTTT

Shifting Fundamental Frequency in Simulated Electric-Acoustic Listening: Effects of F0 Variation

Objective: Shifting the mean fundamental frequency (F0) of target speech down in frequency may be a way to provide the benefits of electric-acoustic stimulation (EAS) to cochlear implant (CI) users whose limited residual hearing typically precludes a benefit, even with amplification. However, a previous study showed a decline in the amount of benefit at the greatest downward frequency shifts, and the authors hypothesized that this might be related to F0 variation. Thus, in the present study, the authors sought to determine the relationship between mean F0, F0 variation, and the benefits of combining electric stimulation from a CI with low-frequency residual acoustic hearing. Design: The authors measured speech intelligibility in normal-hearing listeners using an EAS simulation consisting of a sine vocoder combined either with speech low-pass filtered at 500 Hz or with a pure tone representing target F0. The authors used extracted target voice pitch information to modulate the tone, and manipulated both the frequency of the carrier (mean F0) and the standard deviation of the voice pitch information (F0 variation). Results: A decline in EAS benefit was observed at the lowest mean F0 tested, but this decline disappeared when F0 variation was reduced in proportion to the shift in frequency (i.e., when F0 was shifted logarithmically instead of linearly). Conclusion: Lowering mean F0 by shifting the frequency of a pure tone carrying target voice pitch information can provide as much EAS benefit as an unshifted tone, at least in the current simulation of EAS. These results may have implications for CI users with extremely limited residual acoustic hearing.
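The linear-versus-logarithmic distinction is easiest to see numerically: subtracting a constant in Hz preserves the contour's standard deviation in Hz, whereas scaling by a ratio (a constant shift in log frequency) shrinks the standard deviation in proportion to the shift. A toy illustration, not the authors' processing chain:

```python
import numpy as np

def shift_f0_linear(f0_hz, new_mean_hz):
    """Shift the contour by a fixed number of Hz: SD in Hz is preserved."""
    return f0_hz - (f0_hz.mean() - new_mean_hz)

def shift_f0_log(f0_hz, new_mean_hz):
    """Scale the contour by a ratio: SD shrinks in proportion to the shift."""
    return f0_hz * (new_mean_hz / f0_hz.mean())

f0 = np.array([180., 200., 220., 240.])  # toy contour, mean 210 Hz
for fn in (shift_f0_linear, shift_f0_log):
    out = fn(f0, new_mean_hz=105.0)      # shift mean down an octave
    print(fn.__name__, out.round(1), "SD:", out.std().round(1))
```

Running this shows the linear shift keeping the original SD (about 22 Hz) while the logarithmic shift halves it, mirroring the manipulation that eliminated the decline in EAS benefit at low mean F0.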

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8BewT
via IFTTT

The Norwegian Hearing in Noise Test for Children

Objectives: The aims of this study were to create 12 ten-sentence lists for the Norwegian Hearing in Noise Test (HINT) for children, and to use these lists to collect speech reception thresholds (SRTs) in quiet and in noise to assess speech perception in normal-hearing children 5 to 13 years of age, to establish developmental trends, and to compare the results with those of adults. Data were collected in an anechoic chamber and in an audiometric test room, and the effect of slight room reverberation was estimated. Design: The Norwegian HINT for children was formed from a subset of the adult sentences. Selected sentences were repeatable by 5- and 6-year-old children in quiet listening conditions. Twelve sentence lists were created based on the sentences' phoneme distributions. Six-year-olds were tested with these lists to determine list equivalence. Slopes of performance-intensity (PI) functions relating mean word scores to signal-to-noise ratios (SNRs) were estimated for a group of 7-year-olds and for adults. HINT normative data were collected for 219 adults and children 5 to 13 years of age in anechoic and audiometric test rooms, using noise levels of 55, 60, or 65 dBA. Target sentences always originated from the front, whereas the noise was presented either from the front (noise front, NF), from the right (noise right, NR), or from the left (noise left, NL). The NR and NL scores were averaged to yield a noise side (NS) score. All 219 subjects were tested in the NF condition, and 95 in the NR and NL conditions. A retest of the NF condition at the end of the test session was done for 53 subjects. Longitudinal data were collected by testing 9 children at 6, 8, and 13 years of age. Results: NF and NS group means for adults were −3.7 and −11.8 dB SNR, respectively. Group means for 13-year-olds were −3.3 and −9.7 dB SNR, and for 6-year-olds −0.3 and −5.7 dB SNR, as measured in an anechoic chamber. NF SRTs measured in an audiometric test room were 0.7 to 1.5 dB higher (poorer) than in the anechoic chamber. Developmental trends were comparable in both rooms. PI slopes were 8.0% per dB SNR for the 7-year-olds and 10.1% per dB SNR for the adults. NF SRTs in the anechoic chamber improved by 0.7 dB per year over an age range of 5 to 10 years. Using a PI slope of 8 to 10% per dB, the estimated increase in intelligibility was 4 to 7% per year. Adult SRTs were about 3 dB lower than those of 6-year-olds, corresponding to 25 to 30% better intelligibility for adults. Conclusions: Developmental trends in HINT performance for Norwegian children with normal hearing are similar to those seen in other languages, including American English and Canadian French. SRTs approach adult normative values by the age of 13; however, the benefits of spatial separation of the speech and noise sources are smaller than those seen for adults.
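The conversion from yearly SRT improvement to intelligibility gain is a simple product of the SRT change (dB per year) and the PI slope (percent per dB). A worked check against the reported figures; the slightly wider 4 to 7% range quoted in the abstract presumably reflects rounding and age-related variation in slope:

```python
# SRT improvement (dB/year) times PI slope (%/dB) gives %/year.
srt_gain_db_per_year = 0.7            # reported NF improvement, ages 5-10
for slope_pct_per_db in (8.0, 10.0):  # reported PI slope range
    gain = srt_gain_db_per_year * slope_pct_per_db
    print(f"slope {slope_pct_per_db:.0f} %/dB -> {gain:.1f} %/year")
# Prints 5.6 and 7.0 %/year, consistent with the reported 4-7 % range.
```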

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8BcFg
via IFTTT

The Effect of Residual Acoustic Hearing and Adaptation to Uncertainty on Speech Perception in Cochlear Implant Users: Evidence From Eye-Tracking

Objectives: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit of adding residual acoustic hearing to CI stimulation (typically in the low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/ʃ, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are usually interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. Design: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal-hearing (NH) controls. Participants heard tokens from six b/p and six s/ʃ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s-, and ʃ-initial item). Eye movements to each object were monitored as a measure of how strongly participants were considering each interpretation in the moments leading up to their final percept. Results: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than for NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners did, and this did not differ across continuum steps. Conclusion: Residual acoustic hearing did not improve voicing categorization, suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected, as they usually show a benefit on standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input and have problems when it is not available (as in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather, listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve flexibility in the face of potential misperceptions.
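Identification-function slope, the measure on which CI users differed from NH listeners, is typically estimated by fitting a logistic function to response proportions along the continuum. A minimal sketch with hypothetical proportions (scipy is assumed to be available); a smaller fitted k corresponds to the flatter functions reported for CI users:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'p' responses along the continuum:
    x0 = category boundary, k = slope (sharpness of the category)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 9)  # 8-step VOT continuum
# Hypothetical identification proportions for one listener:
p_resp = np.array([.02, .05, .10, .30, .70, .90, .95, .98])

(x0, k), _ = curve_fit(logistic, steps, p_resp, p0=[4.5, 1.0])
print(f"boundary ~ step {x0:.2f}, slope k = {k:.2f}")
```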

from #Audiology via ola Kala on Inoreader http://ift.tt/1m8BewF
via IFTTT

Everyday Listening Performance of Children Before and After Receiving a Second Cochlear Implant: Results Using the Parent Version of the Speech, Spatial, and Qualities of Hearing Scale

Objectives: To evaluate change in individual children's performance in general areas of everyday listening following sequential bilateral implantation, and to identify the specific types of listening scenarios in which performance change occurred. The first hypothesis was that parent performance ratings for their child would be higher in the bilateral than in the unilateral implant condition for each section of the Speech, Spatial, and Qualities of Hearing scale for parents, viz. speech perception, spatial hearing, and qualities of hearing. The second hypothesis was that ratings for the participant group would be higher in the bilateral condition for speech perception items involving group conversation or background noise, for spatial hearing items, and for qualities of hearing items focused on sound segregation or listening effort. Design: Children receiving sequential bilateral implants at the Royal Victorian Eye and Ear Hospital and fulfilling the selection criteria (primarily no significant cognitive or developmental delays, and oral English language skills of child and parent sufficient for completing assessments) were invited to participate in a wider project evaluating outcomes. The assessment protocol for older children included the Speech, Spatial, and Qualities of Hearing scale for parents. All children (n = 20; ages 4 to 15 years) whose parents completed the scale preoperatively and at 24 months postoperatively were included in this study. Ratings obtained preoperatively in the unilateral implant condition (or unilateral implant plus hearing aid for 4 participants) were compared with those obtained postoperatively in the bilateral implant condition. Results: Bilateral ratings were significantly higher than unilateral ratings on the speech section for 12 children (W ≥ 7.0; p ≤ 0.03), on the spatial section for 13 children (W ≥ 15.0; p ≤ 0.03), and on the qualities of hearing section for 9 children (W ≥ 15.0; p ≤ 0.047). The difference between conditions was unrelated to time between implants or age at bilateral implantation (r ≤ 0.4; p ≥ 0.082). The median bilateral ratings for the participant group were higher for all eight speech perception items, including, as predicted, those involving group conversation and/or background noise (W ≥ 37.5; p ≤ 0.043). Also as predicted, the median bilateral ratings for the participant group were higher for all six spatial hearing items (W ≥ 88.0; p ≤ 0.014) and for qualities of hearing items related to sound segregation (W ≥ 94.0; p ≤ 0.029), but not for those related to listening effort (W ≤ 92.0; p ≥ 0.112). Conclusions: Seventy-five percent of parents perceived change in their child's daily listening performance postoperatively, and 25% perceived change across all three listening areas. For the overall participant group, parents perceived a change in performance in the majority of specific listening scenarios, although change was limited in the qualities of hearing section, including no change in listening effort. Previous research suggests the postoperative change was likely due to the head-shadow effect and improved spatial hearing. Additional contributions may have been made by binaural summation, redundancy, and unmasking. For these participants, differences between device conditions may have been limited by their relatively old age at implantation, the delay between implants, and limited bilateral experience. These results will provide valuable information to families during preoperative counseling and in postoperative discussions about expected progress and evident benefit.
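The W statistics reported above come from Wilcoxon signed-rank tests on paired unilateral-versus-bilateral ratings. A minimal sketch of that comparison with invented ratings for a single child (not data from the study), using scipy:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 0-10 ratings across ten speech items, given by a parent
# before (unilateral) and after (bilateral) the second implant:
unilateral = np.array([4, 5, 3, 6, 4, 5, 2, 6, 5, 4], dtype=float)
bilateral  = np.array([6, 7, 5, 7, 6, 6, 5, 8, 6, 6], dtype=float)

stat, p = wilcoxon(unilateral, bilateral)  # paired, nonparametric
print(f"W = {stat:.1f}, p = {p:.3f}")
```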

from #Audiology via ola Kala on Inoreader http://ift.tt/1JcQBte
via IFTTT

Statistical Learning, Syllable Processing, and Speech Production in Healthy Hearing and Hearing-Impaired Preschool Children: A Mismatch Negativity Study

Objectives: The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairment (HI), and elucidated possible perceptual mediators of speech production. Design: Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm, and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Results: Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes specifically in the N2 and the early parts of the late discriminative negativity components, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production: the lone difference found in the speech production tasks was a mild delay in regulating speech intensity. Conclusions: In addition to previously reported deficits of sound-feature discrimination, the present results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.
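In a mismatch negativity paradigm, the response of interest is the deviant-minus-standard difference wave, which peaks as a negativity roughly 100 to 250 ms after stimulus onset. A sketch of that computation with synthetic averaged ERPs standing in for real recordings:

```python
import numpy as np

fs = 500.0                          # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1.0 / fs)  # epoch: -100 to 500 ms

# Synthetic averaged ERPs (microvolts) for illustration only:
standard = 0.5 * np.sin(2 * np.pi * 4 * t)
deviant = standard - 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))

mmn = deviant - standard            # the MMN difference wave

# Peak search in a typical 100-250 ms MMN window:
win = (t >= 0.10) & (t <= 0.25)
i = np.argmin(mmn[win])             # MMN is a negativity
print(f"MMN peak: {mmn[win][i]:.2f} uV at {t[win][i] * 1000:.0f} ms")
```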

from #Audiology via ola Kala on Inoreader http://ift.tt/1O9eysW
via IFTTT

Study: Emotion processing in the brain changes with tinnitus severity

Tinnitus, otherwise known as ringing in the ears, affects nearly one-third of adults over age 65. The condition can develop as part of age-related hearing loss or from a traumatic injury.

from #Audiology via ola Kala on Inoreader http://ift.tt/1mlAWmx
via IFTTT
