Monday, 30 October 2017

A New Age of Communication Powered by Purposeful Innovation

This course covers Phonak's latest product introductions as well as software and programming details to promote successful patient hearing instrument fittings.

from #Audiology via ola Kala on Inoreader http://ift.tt/2gXqXHp
via IFTTT

Language Outcomes in Children Who Are Deaf and Hard of Hearing: The Role of Language Ability Before Hearing Aid Intervention

Purpose
Early auditory experiences are fundamental in infant language acquisition. Research consistently demonstrates the benefits of early intervention (i.e., hearing aids) to language outcomes in children who are deaf and hard of hearing. The nature of these benefits and their relation with prefitting development are, however, not well understood.
Method
This study examined Ontario Infant Hearing Program birth cohorts to explore predictors of performance on the Preschool Language Scale–Fourth Edition at the time of (N = 47) and after (N = 19) initial hearing aid intervention.
Results
Regression analyses revealed that, before the hearing aid fitting, severity of hearing loss negatively predicted 19% and 10% of the variance in auditory comprehension and expressive communication, respectively. After hearing aid fitting, children's standard scores on language measures remained stable, but they made significant improvement in their progress values, which represent individual skills acquired on the test, rather than standing relative to same-age peers. Magnitude of change in progress values was predicted by a negative interaction of prefitting language ability and severity of hearing loss for the Auditory Comprehension scale.
Conclusions
These findings highlight the importance of considering a child's prefitting language ability in interpreting eventual language outcomes. Possible mechanisms of hearing aid benefit are discussed.
Supplemental Materials
http://ift.tt/2iPlF0N

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0222/2661522/Language-Outcomes-in-Children-Who-Are-Deaf-and
via IFTTT

Associations Between the 2D:4D Proxy Biomarker for Prenatal Hormone Exposures and Symptoms of Developmental Language Disorder

Purpose
Relative lengths of the index (2D) and ring (4D) fingers in humans represent a retrospective biomarker of prenatal hormonal exposures. For this reason, the 2D:4D digit ratio can be used to investigate potential hormonal contributions to the etiology of neurodevelopmental disorders. This study tested potential group differences in 2D:4D digit ratios in a sample of boys with and without developmental language disorder (DLD) and examined the strength of associations between 2D:4D digit ratio and a battery of verbal and nonverbal measures.
Method
A group of 29 boys affected by DLD and a group of 76 boys with typical language abilities participated (age range = 5;6–11;0 years). Scanned images were used to measure finger lengths. Language measures included the core language subtests from the Clinical Evaluation of Language Fundamentals–Fourth Edition (Semel, Wiig, & Secord, 2003), a nonword repetition task, a sentence recall task, and the Test of Early Grammatical Impairment (Rice & Wexler, 2001).
Results
Significant group differences indicated lower 2D:4D digit ratios in the group with DLD. Modest associations were found between 2D:4D digit ratios and some Clinical Evaluation of Language Fundamentals–Fourth Edition subtests.
Conclusions
Prenatal hormone exposures may play a role in the etiology of some language symptoms.
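
For context, the 2D:4D ratio in the abstract above is simply the length of the index finger (2D) divided by the length of the ring finger (4D) from the same hand. The short Python sketch below illustrates the computation and a basic group comparison using made-up measurements; it is not the study's analysis.

    # Minimal sketch: compute 2D:4D digit ratios from finger-length
    # measurements (mm) and compare two hypothetical groups with a t test.
    import numpy as np
    from scipy import stats

    # Hypothetical right-hand finger lengths in millimetres (not study data).
    index_lengths = np.array([62.1, 60.4, 65.3, 63.0])   # 2D
    ring_lengths = np.array([64.8, 63.9, 66.1, 65.5])    # 4D

    ratios = index_lengths / ring_lengths                 # 2D:4D per child

    # Hypothetical group labels: 1 = DLD, 0 = typical language.
    group = np.array([1, 1, 0, 0])
    t, p = stats.ttest_ind(ratios[group == 1], ratios[group == 0])
    print(ratios, t, p)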

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-17-0143/2661523/Associations-Between-the-2D4D-Proxy-Biomarker-for
via IFTTT

Predicting Intelligibility Gains in Individuals With Dysarthria From Baseline Speech Features

Purpose
Across the treatment literature, behavioral speech modifications have produced variable intelligibility changes in speakers with dysarthria. This study is the first of two articles exploring whether measurements of baseline speech features can predict speakers’ responses to these modifications.
Methods
Fifty speakers (7 older individuals and 43 speakers with dysarthria) read a standard passage in habitual, loud, and slow speaking modes. Eighteen listeners rated how easy the speech samples were to understand. Baseline acoustic measurements of articulation, prosody, and voice quality were collected with perceptual measures of severity.
Results
Cues to speak louder and reduce rate did not confer intelligibility benefits to every speaker. The degree to which cues to speak louder improved intelligibility could be predicted by speakers' baseline articulation rates and overall dysarthria severity. Improvements in the slow condition could be predicted by speakers' baseline severity and temporal variability. Speakers with a breathier voice quality tended to perform better in the loud condition than in the slow condition.
Conclusions
Assessments of baseline speech features can be used to predict appropriate treatment strategies for speakers with dysarthria. Further development of these assessments could provide the basis for more individualized treatment programs.
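
The abstract does not spell out the statistical model, but the kind of prediction it describes can be illustrated with a simple linear regression relating baseline measures to intelligibility gain. The Python sketch below uses simulated placeholder data and hypothetical predictor names; it is an illustration of the approach, not the authors' analysis.

    # Illustrative sketch only: a linear model relating hypothetical baseline
    # measures to intelligibility gain in the loud condition. Predictors and
    # data are placeholders, not the study's actual variables or values.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 50
    articulation_rate = rng.normal(4.0, 1.0, n)   # syllables/s (hypothetical)
    severity = rng.uniform(0, 100, n)             # perceptual severity rating
    gain_loud = 0.5 * articulation_rate - 0.1 * severity + rng.normal(0, 1, n)

    X = np.column_stack([articulation_rate, severity])
    model = LinearRegression().fit(X, gain_loud)
    print(model.coef_, model.score(X, gain_loud))  # slopes and R^2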

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2016_JSLHR-S-16-0218/2661025/Predicting-Intelligibility-Gains-in-Individuals
via IFTTT

Predicting Intelligibility Gains in Dysarthria Through Automated Speech Feature Analysis

Purpose
Behavioral speech modifications have variable effects on the intelligibility of speakers with dysarthria. In the companion article, a significant relationship was found between measures of speakers' baseline speech and their intelligibility gains following cues to speak louder and reduce rate (Fletcher, McAuliffe, Lansford, Sinex, & Liss, 2017). This study reexamines these features and assesses whether automated acoustic assessments can also be used to predict intelligibility gains.
Method
Fifty speakers (7 older individuals and 43 with dysarthria) read a passage in habitual, loud, and slow speaking modes. Automated measurements of long-term average spectra, envelope modulation spectra, and Mel-frequency cepstral coefficients were extracted from short segments of participants' baseline speech. Intelligibility gains were statistically modeled, and the predictive power of the baseline speech measures was assessed using cross-validation.
Results
Statistical models could predict the intelligibility gains of speakers they had not been trained on. The automated acoustic features were better able to predict speakers' improvement in the loud condition than the manual measures reported in the companion article.
Conclusions
These acoustic analyses present a promising tool for rapidly assessing treatment options. Automated measures of baseline speech patterns may enable more selective inclusion criteria and stronger group outcomes within treatment studies.
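
To make the pipeline concrete, the Python sketch below shows one plausible way to extract Mel-frequency cepstral coefficient summaries from a short baseline recording and evaluate a predictive model with cross-validation. The file names, feature summary, and regressor are assumptions for illustration, not the authors' implementation.

    # Rough sketch of the kind of pipeline described: extract MFCCs from a
    # short baseline sample, summarise them per speaker, and evaluate a
    # predictive model with cross-validation.
    import numpy as np
    import librosa
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score, LeaveOneOut

    def mfcc_summary(wav_path, sr=16000, n_mfcc=13):
        """Mean and std of MFCCs over a short baseline speech segment."""
        y, sr = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical inputs: one baseline recording and one gain score per speaker.
    wav_paths = ["speaker01_baseline.wav", "speaker02_baseline.wav"]  # etc.
    intelligibility_gain = np.array([5.2, -1.3])                      # etc.

    X = np.vstack([mfcc_summary(p) for p in wav_paths])
    scores = cross_val_score(Ridge(alpha=1.0), X, intelligibility_gain,
                             cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
    print(-scores.mean())  # mean absolute error on held-out speakers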

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-S-16-0453/2661026/Predicting-Intelligibility-Gains-in-Dysarthria
via IFTTT

Value of T1-weighted Magnetic Resonance Imaging in Cholesteatoma Detection.

Objective: To reveal the usefulness of T1-weighted (T1W) imaging on diagnostic magnetic resonance (MR) imaging for cholesteatoma. Study Design: A retrospective case review. Setting: Tertiary referral center. Patients: Fifty-three patients (57 ears; 6-82 yr of age) suspected of having cholesteatomas and treated. Intervention: Preoperative MR imaging, including non-echo planar (non-EP) diffusion-weighted (DW) and T1W imaging. Main Outcome Measures: Primary outcome measures included the comparison of diagnostic accuracy for the detection of cholesteatomas using non-EP DW imaging alone (criterion 1) versus non-EP DW imaging together with T1W imaging (criterion 2). Diagnostic accuracy was evaluated in each case by comparing MR imaging with surgical findings. Secondary outcome measures included the comparison of the rates of cases showing a high T1W signal between cholesteatomas and noncholesteatomas that showed a high non-EP DW signal. Results: The sensitivity, specificity, and accuracy according to criterion 1 were 93.5, 63.6, and 87.7%, and those according to criterion 2 were 89.1, 100, and 91.2%, respectively. Of 43 cholesteatoma cases showing a high non-EP DW signal, only 2 (5%) showed a high T1W signal. In contrast, all four noncholesteatoma cases showing a high non-EP DW signal also showed a high T1W signal (100%), and these rates were significantly different (p
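
The sensitivity, specificity, and accuracy figures quoted above follow directly from a 2 x 2 table of imaging results against surgical findings. The short Python sketch below shows the standard formulas; the counts passed in are inferred to be consistent with the criterion 1 figures, for illustration only, and are not taken from the paper's tabulation.

    # Worked example of the diagnostic-accuracy metrics reported above,
    # computed from a 2 x 2 confusion matrix (counts are illustrative).
    def diagnostic_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)   # detected cholesteatomas
        specificity = tn / (tn + fp)   # correctly excluded non-cholesteatomas
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    # Roughly reproduces 93.5%, 63.6%, 87.7% (criterion 1).
    print(diagnostic_metrics(tp=43, fn=3, tn=7, fp=4))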

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2A2jdrV
via IFTTT

Usefulness of Electrical Auditory Brainstem Responses to Assess the Functionality of the Cochlear Nerve Using an Intracochlear Test Electrode.

Objective: To use an intracochlear test electrode to assess the integrity and functionality of the auditory nerve in cochlear implant (CI) recipients and to compare electrical auditory brainstem responses (eABR) obtained via the test electrode with those obtained via the CI. Setting: Otolaryngology department, tertiary referral hospital. Patients: Ten subjects (age at implantation 55 yr; range, 19-72 yr) were subsequently implanted with a MED-EL CONCERTO CI on the side without any useful residual hearing. Interventions: Following identification of the round window (RW), the test electrode was inserted into the cochlea prior to cochlear implantation. Main Outcome Measures: To assess the quality of an eABR waveform, scoring criteria from Walton et al. (2008) were used. The waveforms in each session were classified by algorithmic detection of waves III and V and by visual assessment of the waveform. Speech performance was evaluated with monosyllable, disyllable, and sentence recognition tests. Results: Responses to electrical stimulation could be evoked with both the test electrode and the CI in all subjects. No significant differences in latencies or amplitudes after stimulation were found between the test electrode and the CI. All subjects obtained useful hearing with their CI and use their implants daily. Conclusions: The intracochlear test electrode may be suitable for testing the integrity of the auditory nerve by recording eABR signals. This allows for further research on the status of the auditory nerve after tumor removal and its correlation with auditory performance.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xC3vlY
via IFTTT

The Etiological Relationship Between Migraine and Sudden Hearing Loss.

Objectives: To investigate the relationship between sudden sensorineural hearing loss (SSNHL) and migraine, assess the prevalence of migraine in patients with idiopathic SSNHL, and determine a possible common vascular etiopathogenesis for migraine and SSNHL. Study Design: Prospective cohort study. Setting: Tertiary referral center. Patients: This study initially assessed 178 SSNHL cases obtained from the Head and Neck Surgery Clinic patient database at a tertiary hospital in Turkey between January 2011 and March 2016. Ultimately, a total of 61 idiopathic SSNHL patients participated in the present study. Interventions: Diagnostic. Main Outcome Measures: Cases with inflammation in the middle or inner ear; a retrocochlear tumor; autoimmune, infectious, functional, metabolic, neoplastic, traumatic, toxic, or vascular causes; Meniere's disease; otosclerosis; multiple sclerosis; and/or cerebrovascular diseases were excluded. Results: Of the 61 idiopathic SSNHL patients, 34 were women (55.74%), and 24 (39.34%) had migraine according to the criteria of the International Headache Society (IHS). The mean age of the migraine patients (Group 1) was 43.83 +/- 13.16 years, and that of those without migraine (Group 2) was 51.05 +/- 16.49 years. The groups did not significantly differ in terms of age, sex, or SSNHL recovery rates according to the Siegel criteria (p > 0.05). Ten of the migraine patients experienced visual aura, and the recovery rates of this group were higher. Additionally, the rate of total hearing loss was lower in Group 1 (n = 3, 12.5%) than in Group 2 (n = 10, 27%). Conclusion: SSNHL patients had a higher prevalence of migraine. Although those with migraine had higher recovery rates, the differences were not statistically significant.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2A1x5CU
via IFTTT

Therapeutic Mastoidectomy Does Not Increase Postoperative Complications in the Management of the Chronic Ear.

Objective: Tympanoplasty with or without concurrent therapeutic mastoidectomy is a controversial topic in the management of chronic ear disease. We sought to describe whether there is a significant difference in postoperative complications. Study Design: Retrospective cohort study. Setting: American College of Surgeons National Surgical Quality Improvement Program public files. Patients: Current Procedural Terminology codes were used to identify patients with chronic ear disease undergoing tympanoplasty with or without concurrent mastoidectomy in the 2011 to 2014 American College of Surgeons National Surgical Quality Improvement Program files. Intervention: Therapeutic. Main Outcome Measures: Variables were compared with χ2, Fisher's exact, and Mann-Whitney U tests, as appropriate, to analyze postoperative complications between tympanoplasty with or without concurrent mastoidectomy. To account for confounding factors, presence of a complication was analyzed in binary logistic regression. The analysis considered sex, hypertension, obesity, advanced age, diabetes, smoking status, American Society of Anesthesiologists Physical Status, and procedure. Results: There were 4,087 patients identified as meeting criteria (tympanoplasty = 2,798, tympanomastoidectomy = 1,289). There was no statistical difference in postoperative complications (tympanoplasty n = 49 [1.8%], tympanomastoidectomy n = 33 [2.6%]; p = 0.087) or return to the operating room (tympanoplasty = 4 [0.1%], tympanomastoidectomy = 6 [0.5%]; p = 0.082). Binary logistic regression demonstrated smoking as a predictor of a postoperative complication (OR: 1.758, 95% CI: 1.084-2.851; p = 0.022), while concurrent mastoidectomy did not significantly increase the risk of complication (OR: 1.440, 95% CI: 0.915-2.268; p = 0.115). There was a significant difference in mean operative time between tympanoplasty and tympanomastoidectomy: 85.7 versus 154.23 min, p
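
The odds ratios and confidence intervals above come from a binary logistic regression on a complication indicator. The Python sketch below shows, with simulated placeholder data (coefficients chosen loosely in line with the reported odds ratios), how such a model is fit and how its coefficients are exponentiated into odds ratios; it is not the study's dataset or code.

    # Sketch of a binary logistic regression for a postoperative-complication
    # indicator, reporting odds ratios with 95% CIs. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 4000
    df = pd.DataFrame({
        "smoker": rng.integers(0, 2, n),
        "mastoidectomy": rng.integers(0, 2, n),
    })
    logits = -4.0 + 0.56 * df["smoker"] + 0.36 * df["mastoidectomy"]
    df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    X = sm.add_constant(df[["smoker", "mastoidectomy"]])
    fit = sm.Logit(df["complication"], X).fit(disp=False)
    print(np.exp(fit.params))      # odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals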

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xC3izc
via IFTTT

Intracochlear Measurements of Interaural Time and Level Differences Conveyed by Bilateral Bone Conduction Systems.

Hypothesis: Intracochlear pressures (PIC) and stapes velocity (Vstap) elicited by bilaterally placed bone-anchored hearing devices (BAHD) will be systematically modulated by imposed interaural time differences (ITD) and level differences (ILD), demonstrating the potential for users of bilateral BAHD to access these binaural cues. Background: BAHD are traditionally implanted unilaterally under the assumption that transcranial cross-talk limits interaural differences. Recent studies have demonstrated improvements in binaural and spatial performance with bilateral BAHD; however, objective measures of binaural cues from bilateral BAHD are lacking. Methods: Bone-conduction transducers were coupled to both mastoids of cadaveric specimens via implanted titanium abutments. PIC and Vstap were measured using intracochlear pressure probes and laser Doppler vibrometry, respectively, during stimulation with pure-tone stimuli of varied frequency (250-4000 Hz) under ipsilateral, contralateral, and bilateral ITD (-1 to 1 ms) and ILD (-20 to 20 dB) conditions. Results: Bilateral stimulation produced constructive and destructive interference patterns that varied dramatically with ITD and stimulus frequency. Variation of ITD led to large variation of PIC and Vstap, with opposing effects in ipsilateral and contralateral ears expected to lead to "ITD to ILD conversion." Variation of ILD produced more straightforward (monotonic) variations of PIC and Vstap, with ipsilateral-favoring ILD producing higher PIC and Vstap than contralateral-favoring ILD. Conclusion: Variation of ITDs and ILDs conveyed by BAHD systematically modulated cochlear inputs. While transcranial cross-talk leads to complex interactions that depend on cue type and stimulus frequency, binaural disparities potentiate binaural benefit, providing a basis for improved sound localization and speech-in-noise perception.
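
As a concrete illustration of the cues being manipulated, the Python sketch below imposes an interaural time difference (a delay to one ear) and an interaural level difference (an attenuation of one ear) on a pure-tone pair. The parameters are arbitrary examples, not the study's stimulus set.

    # Impose an ITD and ILD on a two-channel pure-tone stimulus (illustrative).
    import numpy as np

    fs = 48000        # sample rate (Hz)
    f = 1000          # tone frequency (Hz)
    dur = 0.1         # seconds
    itd = 0.0005      # +0.5 ms: right ear delayed relative to left
    ild_db = 10.0     # +10 dB: left ear louder than right

    t = np.arange(int(fs * dur)) / fs
    left = np.sin(2 * np.pi * f * t)
    right = np.sin(2 * np.pi * f * (t - itd)) * 10 ** (-ild_db / 20)
    stimulus = np.stack([left, right], axis=1)   # two-channel stimulus
    print(stimulus.shape)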

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2A2iXZZ
via IFTTT

Evaluating Multipulse Integration as a Neural-Health Correlate in Human Cochlear Implant Users: Effects of Stimulation Mode

Abstract

Previous psychophysical studies have shown that a steep detection-threshold-versus-stimulation-rate function (multipulse integration; MPI) is associated with laterally positioned electrodes producing a broad neural excitation pattern. These findings are consistent with steep MPI depending on either a certain width of neural excitation, allowing a large population of neurons operating at a low point on their dynamic range to respond to an increase in stimulation rate, or a certain slope of the excitation pattern that allows recruitment of neurons at the excitation periphery. Results of the current study provide additional support for these mechanisms by demonstrating significantly flattened MPI functions in narrow bipolar stimulation compared with monopolar stimulation. The study further examined the relationship between the steepness of the psychometric functions for detection (d’ versus log current level) and MPI. In contrast to findings in monopolar stimulation, the data measured in bipolar stimulation suggest that steepness of the psychometric functions explained a moderate amount of the across-site variance in MPI. Steepness of the psychometric functions, however, cannot explain why MPI flattened in bipolar stimulation, since slopes of the psychometric functions were comparable in the two stimulation modes. Lastly, our results show that across-site mean MPI measured in monopolar and bipolar stimulation correlated with speech recognition in opposite directions, with steeper monopolar MPI being associated with poorer performance but steeper bipolar MPI being associated with better performance. If steeper MPI requires broad stimulation of the cochlea, the correlation between monopolar MPI and speech recognition can be interpreted as the detrimental effect of poor spectral resolution on speech recognition. Assuming bipolar stimulation produces narrow excitation, and that MPI measured in bipolar stimulation reflects primarily responses of the on-site neurons, the correlation between bipolar MPI and speech recognition can be understood in light of the importance of neural survival for speech recognition.
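
The MPI measure discussed above is, in essence, the slope of the detection-threshold-versus-stimulation-rate function. The brief Python sketch below fits that slope to made-up threshold data to show the computation; the values and units are placeholders, not from the study.

    # Sketch of a multipulse-integration (MPI) slope: fit detection threshold
    # against log10 pulse rate. Rates and thresholds are hypothetical values.
    import numpy as np

    rates = np.array([250, 500, 1000, 2000, 4000])          # pulses per second
    thresholds = np.array([52.0, 48.5, 45.2, 41.9, 38.6])   # dB (hypothetical)

    slope, intercept = np.polyfit(np.log10(rates), thresholds, 1)
    print(slope)  # dB per decade of rate; more negative = steeper MPI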



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2yXnOLZ
via IFTTT

Electrophysiological Evidence of the Basilar-Membrane Travelling Wave and Frequency Place Coding of Sound in Cochlear Implant Recipients

Aim: To obtain direct evidence for the cochlear travelling wave in humans by performing electrocochleography from within the cochlea in subjects implanted with an auditory prosthesis. Background: Sound induces a travelling wave that propagates along the basilar membrane, exhibiting cochleotopic tuning with a frequency-dependent phase delay. To date, evoked potentials and psychophysical experiments have supported the presence of the travelling wave in humans, but direct measurements have not been made. Methods: Electrical potentials in response to rarefaction and condensation acoustic tone bursts were recorded from multiple sites along the human cochlea, directly from a cochlear implant electrode during, and immediately after, its insertion. These recordings were made from individuals with residual hearing. Results: Electrocochleography was recorded from 11 intracochlear electrodes in 7 ears from 6 subjects, with detectable responses on all electrodes in 5 ears. Cochleotopic tuning and frequency-dependent phase delay of the cochlear microphonic were demonstrated. The response latencies were slightly shorter than anticipated, which we attribute to the subjects' hearing loss. Conclusions: Direct evidence for the travelling wave was observed. Electrocochleography from cochlear implant electrodes provides site-specific information on hair cell and neural function of the cochlea, with potential diagnostic value.
Audiol Neurotol 2017;22:180-189
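
The abstract does not detail how the cochlear microphonic was isolated, but one common approach (assumed here, not necessarily the authors' exact method) is to combine averaged responses to rarefaction and condensation stimuli: the difference emphasizes the polarity-following microphonic, the sum emphasizes polarity-invariant neural components, and the microphonic's phase at the stimulus frequency then yields the frequency-dependent delay. The Python sketch below illustrates this on synthetic data.

    # Separate a cochlear microphonic (CM) estimate from neural components by
    # combining averaged responses to opposite-polarity tone bursts (synthetic
    # data; the study's actual processing may differ).
    import numpy as np

    fs = 20000        # sampling rate of the evoked-potential recording (Hz)
    f_stim = 500      # tone-burst frequency (Hz)

    # Hypothetical averaged responses from one intracochlear electrode.
    t = np.arange(0, 0.02, 1 / fs)
    rarefaction = np.sin(2 * np.pi * f_stim * t + 0.3) + 0.1 * np.random.randn(t.size)
    condensation = -np.sin(2 * np.pi * f_stim * t + 0.3) + 0.1 * np.random.randn(t.size)

    cm = (rarefaction - condensation) / 2      # polarity-following part (CM)
    neural = (rarefaction + condensation) / 2  # polarity-invariant part

    # Phase of the CM at the stimulus frequency, from which a frequency-
    # dependent delay along the electrode array could be estimated.
    spectrum = np.fft.rfft(cm)
    freqs = np.fft.rfftfreq(cm.size, 1 / fs)
    phase = np.angle(spectrum[np.argmin(np.abs(freqs - f_stim))])
    print(phase)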

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2yfASzq
via IFTTT