Thursday, August 2, 2018

Mandarin-Speaking, Kindergarten-Aged Children With Cochlear Implants Benefit From Natural F0 Patterns in the Use of Semantic Context During Speech Recognition

Purpose
The purpose of this study was to investigate the extent to which semantic context and F0 contours affect speech recognition by Mandarin-speaking, kindergarten-aged children with cochlear implants (CIs).
Method
The experimental design manipulated two factors: semantic context, by comparing the intelligibility of normal sentences versus word lists, and F0 contours, by comparing the intelligibility of utterances with natural versus flat F0 patterns. Twenty-two children with CIs completed a speech recognition test.
Results
Children with CIs could use both semantic context and F0 contours to assist speech recognition. Furthermore, natural F0 patterns provided greater benefit when semantic context was present than when it was absent.
Conclusion
Dynamic F0 contours play an important role in speech recognition by Mandarin-speaking children with CIs despite the well-known limitation of CI devices in extracting F0 information.
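For readers who want to build comparable stimuli, below is a minimal sketch of one standard way to flatten an utterance's F0 contour: Praat's manipulation pipeline driven from Python through the parselmouth library. The input file name and the 75-600 Hz pitch range are illustrative assumptions; the paper does not state which resynthesis tool it used.

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sentence.wav")  # a natural-F0 recording (assumed file)
dur = call(snd, "Get total duration")

# Build a Praat Manipulation object (10-ms time step, 75-600 Hz pitch range).
manipulation = call(snd, "To Manipulation", 0.01, 75, 600)
pitch_tier = call(manipulation, "Extract pitch tier")

# Replace the dynamic contour with a single point at the utterance's mean F0.
mean_f0 = call(pitch_tier, "Get mean (points)", 0, 0)  # 0, 0 = whole utterance
call(pitch_tier, "Remove points between", 0, dur)
call(pitch_tier, "Add point", dur / 2, mean_f0)

# Resynthesize: identical segmental content and duration, but a flat F0.
call([pitch_tier, manipulation], "Replace pitch tier")
flat = call(manipulation, "Get resynthesis (overlap-add)")
call(flat, "Save as WAV file", "sentence_flat.wav")
```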

from #Audiology via ola Kala on Inoreader https://ift.tt/2v9zbzE
via IFTTT

The Effect of e-Book Vocabulary Instruction on Spanish–English Speaking Children

Purpose
This study aimed to examine the effect of an intensive vocabulary intervention embedded in e-books on the vocabulary skills of young Spanish–English speaking English learners (ELs) from low–socioeconomic status backgrounds.
Method
Children (N = 288) in kindergarten and 1st grade were randomly assigned to treatment and read-only conditions. All children received e-book readings approximately 3 times a week for 10–20 weeks using the same books. Children in the treatment condition received e-books supplemented with vocabulary instruction that included scaffolding through explanations in Spanish, repetition in English, checks for understanding, and highlighted morphology.
Results
There was a main effect of the intervention on expressive labeling (g = 0.38) and vocabulary on the Peabody Picture Vocabulary Test–Fourth Edition (g = 0.14; Dunn & Dunn, 2007), with no significant moderation effect of initial Peabody Picture Vocabulary Test score. There was no significant difference between conditions on children's expressive definitions.
Conclusion
Findings substantiate the effectiveness of computer-implemented embedded vocabulary intervention for increasing ELs' vocabulary knowledge.
Implications
Computer-assisted vocabulary instruction with scaffolding through Spanish explanations, repetitions, and highlighted morphology is a promising approach to facilitate word learning for ELs in kindergarten and 1st grade.
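For reference, the effect sizes reported above are Hedges' g, a standardized mean difference with a small-sample bias correction. The sketch below shows the computation; the group statistics in the example are hypothetical, not values from the study.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with small-sample bias correction."""
    # Pooled standard deviation across treatment and control groups.
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled            # Cohen's d
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)   # Hedges' correction factor
    return d * correction

# Hypothetical example: treatment group scores 4 points higher on a vocabulary
# measure, with SDs near 10, in samples of 144 children per condition.
print(hedges_g(54.0, 50.0, 10.0, 10.5, 144, 144))  # ~0.39
```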

from #Audiology via ola Kala on Inoreader https://ift.tt/2MfVMBs
via IFTTT

Reliability and Repeatability of the Speech Cue Profile

Purpose
Researchers have long noted speech recognition variability that is not explained by the pure-tone audiogram. Previous work (Souza, Wright, Blackburn, Tatman, & Gallun, 2015) demonstrated that, within a small group of listeners with sensorineural hearing loss, individuals relied on different types of acoustic cues to identify speechlike stimuli, varying in the extent to which they drew on spectral (or temporal) information for identification. Consistent with recent calls for data rigor and reproducibility, the primary aims of this study were to replicate the pattern of cue use in a larger cohort and to verify the stability of the cue profiles over time.
Method
Cue-use profiles were measured for adults with sensorineural hearing loss using a syllable identification task consisting of synthetic speechlike stimuli in which spectral and temporal dimensions were manipulated along continua. For the first set, a static spectral shape varied from alveolar to palatal, and a temporal envelope rise time varied from affricate to fricative. For the second set, formant transitions varied from labial to alveolar and a temporal envelope rise time varied from approximant to stop. A discriminant feature analysis was used to determine to what degree spectral and temporal information contributed to stimulus identification. A subset of participants completed a 2nd visit using the same stimuli and procedures.
Results
When spectral information was static, most participants were more influenced by spectral than by temporal information. When spectral information was dynamic, participants demonstrated a balanced distribution of cue-use patterns, with nearly equal numbers of individuals influenced by spectral or temporal cues. Individual cue profile was repeatable over a period of several months.
Conclusion
In combination with previously published data, these results indicate that listeners with sensorineural hearing loss are influenced by different cues to identify speechlike sounds and that those patterns are stable over time.
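As an illustration of how such a cue-use profile can be quantified, the sketch below fits a simple discriminant model (logistic regression on standardized spectral and temporal step values) to simulated identification responses and reports a relative spectral weight. The simulated listener and trial counts are assumptions; the original study's discriminant feature analysis may differ in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 400
spectral_step = rng.integers(1, 8, n_trials)  # e.g., alveolar (1) .. palatal (7)
temporal_step = rng.integers(1, 8, n_trials)  # e.g., affricate (1) .. fricative (7)

# Simulated listener who weights the spectral cue about 3:1 over the temporal cue.
logit = 1.2 * (spectral_step - 4) + 0.4 * (temporal_step - 4)
response = rng.random(n_trials) < 1 / (1 + np.exp(-logit))

# Standardize both cue dimensions so the coefficients are comparable.
X = StandardScaler().fit_transform(np.column_stack([spectral_step, temporal_step]))
model = LogisticRegression().fit(X, response)

w_spec, w_temp = np.abs(model.coef_[0])
print(f"relative spectral weight: {w_spec / (w_spec + w_temp):.2f}")  # ~0.75
```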

from #Audiology via ola Kala on Inoreader https://ift.tt/2vbSUPo
via IFTTT

Predicting Language Difficulties in Middle Childhood From Early Developmental Milestones: A Comparison of Traditional Regression and Machine Learning Techniques

Purpose
The current study aimed to compare traditional logistic regression models with machine learning algorithms in assessing how well (a) communication performance at 3 years of age and (b) broader developmental skills (motor, social, and adaptive) at 3 years of age predict language outcomes at 10 years of age.
Method
Participants (N = 1,322) were drawn from the Western Australian Pregnancy Cohort (Raine) Study (Straker et al., 2017). A general developmental screener, the Infant Monitoring Questionnaire (Squires, Bricker, & Potter, 1990), was completed by caregivers at the 3-year follow-up. Language ability at 10 years old was assessed using the Clinical Evaluation of Language Fundamentals–Third Edition (Semel, Wiig, & Secord, 1995). Logistic regression models and interpretable machine learning algorithms were used to assess predictive abilities of early developmental milestones for later language outcomes.
Results
Overall, the findings showed that prediction accuracies were comparable between logistic regression and machine learning models, whether communication performance alone or performance on communication and broader developmental domains was used to predict language performance at 10 years old. Decision trees are included to present these findings visually but must be interpreted with caution because of the models' poor overall accuracy.
Conclusions
The current study provides preliminary evidence that machine learning algorithms provide equivalent predictive accuracy to traditional methods. Furthermore, the inclusion of broader developmental skills did not improve predictive capability. Assessment of language at more than 1 time point is necessary to ensure children whose language delays emerge later are identified and supported.
Supplemental Material
https://doi.org/10.23641/asha.6879719
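To make the model comparison concrete, here is a minimal sketch of the kind of analysis described: logistic regression versus a shallow, interpretable decision tree, evaluated by cross-validated AUC on simulated screening data. The feature layout, outcome simulation, and metric are hypothetical stand-ins for the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1322
# Domain scores at age 3 (communication, motor, social, adaptive), standardized.
X = rng.normal(size=(n, 4))
# Simulated age-10 language difficulty, weakly related to early communication.
p = 1 / (1 + np.exp(-(-2.0 + 0.6 * X[:, 0])))
y = rng.random(n) < p

for name, model in [("logistic regression", LogisticRegression()),
                    ("decision tree", DecisionTreeClassifier(max_depth=3))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```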

from #Audiology via ola Kala on Inoreader https://ift.tt/2McWmjk
via IFTTT

Positive Social Interaction and Hearing Loss in Older Adults Living in Rural and Urban Communities

Purpose
This study explored the extent to which hearing loss affected positive social interactions in older adults living in rural and urban communities.
Method
Pure-tone behavioral hearing assessments were administered to 80 adults 60 years of age or older. In addition, all participants completed 2 questionnaires: the Medical Outcomes Study Social Support Survey (Sherbourne & Stewart, 1991) and the Patient Health Questionnaire–9 (Kroenke, Spitzer, & Williams, 2001).
Results
The preliminary findings suggested that adults with hearing loss living in rural towns had poorer positive social interactions compared with their urban counterparts with hearing loss. Also, adults with hearing loss living in rural towns had more symptoms of depression than adults with normal hearing who lived in these same geographical regions.
Conclusions
These preliminary findings could indicate that older adults with hearing loss living in rural communities face more isolation than adults with hearing loss living in urban settings. A better understanding of the extent of social isolation among adults with hearing loss in rural and urban communities is needed.

from #Audiology via ola Kala on Inoreader https://ift.tt/2OByw29
via IFTTT

Fibroblast growth factor 12 is expressed in spiral and vestibular ganglia and necessary for auditory and equilibrium function.

Sci Rep. 2018 Jul 31;8(1):11491

Authors: Hanada Y, Nakamura Y, Ozono Y, Ishida Y, Takimoto Y, Taniguchi M, Ohata K, Koyama Y, Imai T, Morihana T, Kondo M, Sato T, Inohara H, Shimada S

Abstract
We investigated fibroblast growth factor 12 (FGF12) as a transcript enriched in the inner ear by searching published cDNA library databases. FGF12 is a fibroblast growth factor homologous factor, a subset of the FGF superfamily. To date, its localisation and function in the inner ear have not been determined. Here, we show that FGF12 mRNA is localised in spiral ganglion neurons (SGNs) and the vestibular ganglion. We also show that FGF12 protein is localised in SGNs, the vestibular ganglion, and nerve fibres extending beneath hair cells. Moreover, we investigated FGF12 function in auditory and vestibular systems using Fgf12-knockout (FGF12-KO) mice generated with CRISPR/Cas9 technology. Our results show that the inner ear morphology of FGF12-KO mice is not significantly different compared with wild-type mice. However, FGF12-KO mice exhibited an increased hearing threshold, as measured by the auditory brainstem response, as well as deficits in rotarod and balance beam performance tests. These results suggest that FGF12 is necessary for normal auditory and equilibrium function.

PMID: 30065296 [PubMed - in process]

from #Audiology via ola Kala on Inoreader https://ift.tt/2AJtONd
via IFTTT

The Effect of Hearing Aid Bandwidth and Configuration of Hearing Loss on Bimodal Speech Recognition in Cochlear Implant Users

Objectives
(1) To determine the effect of hearing aid (HA) bandwidth on bimodal speech perception in a group of unilateral cochlear implant (CI) patients with diverse degrees and configurations of hearing loss in the nonimplanted ear and (2) to determine whether there are demographic and audiometric characteristics that would help to determine the appropriate HA bandwidth for a bimodal patient.
Design
Participants were 33 experienced bimodal device users with postlingual hearing loss. Twenty-three of them had better speech perception with the CI than with the HA (CI>HA group), and 10 had better speech perception with the HA than with the CI (HA>CI group). Word recognition in sentences (AzBio sentences at +10 dB signal-to-noise ratio presented at 0° azimuth) and in isolation (CNC [consonant-nucleus-consonant] words) was measured in unimodal conditions (CI alone or HAWB, i.e., HA alone in the wideband [WB] condition) and in bimodal conditions (BMWB, BM2k, BM1k, and BM500) as the bandwidth of an actual HA was reduced from WB to 2 kHz, 1 kHz, and 500 Hz. Linear mixed-effects modeling was used to quantify the relationship between speech recognition and listening condition and to assess how audiometric or demographic covariates might influence this relationship in each group.
Results
For the CI>HA group, AzBio scores were significantly higher (on average) in all bimodal conditions than in the best unimodal condition (CI alone) and were highest in the BMWB condition. For CNC scores, on the other hand, there was no significant improvement over the CI-alone condition in any of the bimodal conditions. The opposite pattern was observed in the HA>CI group: CNC word scores were significantly higher in the BM2k and BMWB conditions than in the best unimodal condition (HAWB), but none of the bimodal conditions were significantly better than the best unimodal condition for AzBio sentences (and some of the restricted-bandwidth conditions were actually worse). Demographic covariates did not interact significantly with bimodal outcomes, but some of the audiometric variables did. For CI>HA participants with a flatter audiometric configuration and better mid-frequency hearing, bimodal AzBio scores were significantly higher than the CI-alone score with the WB setting (BMWB) but not with other bandwidths. In contrast, CI>HA participants with more steeply sloping hearing loss and poorer mid-frequency thresholds (≥82.5 dB) had significantly higher bimodal AzBio scores in all bimodal conditions, and BMWB did not differ significantly from the restricted-bandwidth conditions. HA>CI participants with mild low-frequency hearing loss showed the largest bimodal improvement over the best unimodal condition on CNC words and were less affected by HA bandwidth reduction than HA>CI participants with poorer low-frequency thresholds.
Conclusions
The pattern of bimodal performance as a function of HA bandwidth was consistent with the degree and configuration of hearing loss both for patients with CI>HA performance and for those with HA>CI performance. Our results support fitting the HA for all bimodal patients with the widest bandwidth consistent with effective audibility.
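As a rough illustration of the linear mixed-effects analysis named in the design, the sketch below fits recognition score on listening condition with a random intercept per participant, using statsmodels on simulated long-format data. The subject count, condition effects, and column names are assumptions for illustration, not the authors' actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
conditions = ["CI", "BM500", "BM1k", "BM2k", "BMWB"]
subjects = [f"s{i:02d}" for i in range(23)]  # e.g., the CI>HA group

# Simulated long-format scores: a subject-specific baseline plus a small
# bimodal benefit that grows with bandwidth (values are invented).
rows = []
for s in subjects:
    base = rng.normal(60, 8)
    for j, c in enumerate(conditions):
        rows.append({"subject": s, "condition": c,
                     "score": base + 2.0 * j + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# Mixed model: fixed effect of listening condition with CI alone as the
# reference level, random intercept per subject.
model = smf.mixedlm("score ~ C(condition, Treatment(reference='CI'))",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```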

from #Audiology via ola Kala on Inoreader https://ift.tt/2v5yMhV
via IFTTT

Evaluation of the Optimized Pitch and Language Strategy in Cochlear Implant Recipients

Objectives
The Optimized Pitch and Language (OPAL) strategy enhances pitch perception through coding of fundamental frequency (F0) amplitude modulation information in the stimulus envelope delivered to a cochlear implant. Previous research using a prototype of the strategy demonstrated significant benefits in musical pitch and lexical tone discrimination tasks, with no degradation in speech recognition, when compared with the clinical Advanced Combination Encoder (ACE) strategy in a small group of subjects. Based on those studies, a modified version of the strategy was implemented in the commercial Nucleus CP900 series processor. The aims of the present study were to establish whether the CP900 OPAL implementation continued to provide improved F0 pitch perception in a speech intonation task, with no degradation to speech perception in quiet and noise, when compared with the clinical ACE strategy in a larger cohort of subjects. Further aims were to evaluate fitting procedures and subject acclimatization to the strategy after take-home experience.
Design
Twenty experienced adult cochlear implant recipients were enrolled in the study. Two subjects withdrew during the study, leaving 18 sets of data for analysis. A repeated-measures single-subject design with take-home experience was used to test for improved speech intonation perception using OPAL compared with ACE and for comparable performance between strategies on open-set word recognition in quiet at two presentation levels, sentence recognition in adaptive 4-talker babble noise, and speech intelligibility ratings. The stimulation rate employed for OPAL was 1,200 pulses per second per channel, which was higher than the default clinical rate of 900 pulses per second per channel used for ACE by all subjects in the present study. Two variations of the OPAL “F0 restore gain” (the gain applied to restore the loudness of modulated channels) were investigated: “custom,” measured per subject, and “default,” the average of all subjects' custom gains.
Results
A significant group mean benefit on the intonation test of 8.5 percentage points was shown for OPAL compared with ACE. There was a significant period of adaptation to OPAL, with significantly poorer sentence-in-noise scores acutely and after only 2 weeks of take-home experience. After 4 weeks of take-home experience, comparable word perception in quiet and sentence perception in noise were obtained for OPAL. Furthermore, there was good subject acceptability in the field, with comparable speech intelligibility ratings between strategies. Results of the fitting procedure showed that OPAL did not require any additional steps compared with fitting of ACE. A default F0 restore gain provided comparable outcomes to a custom gain setting.
Conclusions
The CP900 OPAL implementation provided a significant benefit to perception of speech intonation when compared with ACE. Comparable speech perception (in quiet and noise) and subjective ratings of speech intelligibility between strategies were also achieved after a period of acclimatization. These outcomes are consistent with results of earlier studies using prototype versions of the strategy and reaffirm its potential for improving F0 pitch perception in speech while preserving coding of segmental speech information. Furthermore, the OPAL strategy can be programmed into subjects' processors using the same fitting procedures used for ACE, thereby simplifying its adoption in clinical settings.
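As a sketch of the general principle behind this kind of coding (not Cochlear's implementation), the example below imposes an F0-rate amplitude modulation on a channel envelope and then applies a compensating gain to restore the channel's original loudness, the role played by the strategy's “F0 restore gain.” All parameter values are illustrative assumptions.

```python
import numpy as np

fs = 1200.0                    # envelope samples per second, chosen to match the
                               # per-channel stimulation rate used in the study
t = np.arange(0, 0.5, 1 / fs)  # 500 ms of envelope samples

envelope = 0.5 + 0.1 * np.sin(2 * np.pi * 3 * t)  # slow, speech-like envelope
f0 = 180.0                                        # voiced-segment F0 in Hz
depth = 0.6                                       # modulation depth (assumed)

# Multiply the envelope by an F0-rate raised-sine modulator.
modulator = 1 - depth * 0.5 * (1 + np.sin(2 * np.pi * f0 * t))
modulated = envelope * modulator

# Restore loudness: scale so the modulated channel keeps its original RMS.
restore_gain = np.sqrt(np.mean(envelope**2) / np.mean(modulated**2))
modulated *= restore_gain
print(f"F0 restore gain applied: {restore_gain:.2f}")
```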

from #Audiology via ola Kala on Inoreader https://ift.tt/2Mc8bGo
via IFTTT
