Saturday, 24 June 2017

Paget's Disease of the Temporal Bone: A Single-Institution Contemporary Review of 27 Patients

Objectives: To report a contemporary review from a single-institution series on Paget's disease of the temporal bone (PDTB). Study Design: Retrospective chart review of patients evaluated from 1998 to 2016. Setting: Quaternary referral center. Patients: Patients with radiographically confirmed PDTB. Main Outcome Measures: Clinical, audiological, and radiological features and management strategies of PDTB. Results: A total of 50 temporal bones in 27 patients (15 men) were diagnosed with PDTB. Symptoms at presentation included hearing loss (n = 23, 85%), headache (n = 18, 67%), dizziness (n = 14, 52%), tinnitus (n = 5, 19%), chronic otitis media (n = 2, 7%), hemifacial spasm without facial paralysis (n = 1, 4%), multiple cranial neuropathies (n = 1, 4%), and neoplastic transformation (n = 1, 4%). Of the 23 ears with audiometric data available for review, 65% exhibited sensorineural hearing loss, and 35% mixed hearing loss. Long-term audiometric follow-up was available for two patients, both of whom demonstrated hearing loss at a rate greater than would be expected for normal aging. Two patients underwent successful cochlear implantation, achieving open-set speech recognition. Radiographic features of temporal bone involvement are reviewed and illustrated. Conclusion: This is the largest single-institution clinical series examining patients with PDTB in the English literature. Variable patterns of temporal bone involvement by Paget's disease are observed, leading to a diverse set of clinical symptoms, including slowly progressive hearing loss, tinnitus, compressive cranial neuropathies, and benign or malignant tumorigenesis. Involvement typically begins in the petrous apex and progresses laterally. Otic capsule bone demineralization occurs late in the disease process. Cochlear implantation appears to be an effective management strategy for patients with severe-to-profound hearing loss.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2r9zhrq
via IFTTT

Surgical Management of a Persistent Stapedial Artery: A Review

Objective: To evaluate the outcome and per- and postoperative complications of the surgical management of patients with a persistent stapedial artery (PSA). Methods: A systematic literature search for reports on patients treated for pulsatile tinnitus and/or conductive hearing loss caused by a PSA was conducted in the PubMed and Embase databases using the terms “stapedial” and “artery.” Inclusion criteria were adequate description of the intervention and pre- and postoperative signs and symptoms. In addition, one case of a PSA, treated at VU University Medical Center Amsterdam, The Netherlands, was included in this series. Intervention: Middle ear surgery consisting of stapedotomy or stapedectomy, and/or transection of the PSA. Main Outcome Measures: Pre- and postoperative hearing levels, pre- and postoperative pulsatile tinnitus, and per- and postoperative complications. Results: Seventeen patients and 18 operated ears were evaluated (16 patients described in 14 articles and our case). Twelve out of 14 ears in which a stapedotomy or stapedectomy was initiated experienced improvement in hearing. In four cases pulsatile tinnitus was described pre- and postoperatively. In all four, pulsatile tinnitus subsided after transection of the PSA. Peroperative bleeding from the PSA was described in four patients and could be controlled during the procedure. No significant postoperative sequelae were reported. Conclusions: In cases of a PSA, improvement of conductive hearing loss is best achieved by stapes surgery, while pulsatile tinnitus is effectively treated with transection of the PSA. To date no long-term postoperative complications have been reported.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2r1OFSn
via IFTTT

Preliminary Model for the Design of a Custom Middle Ear Prosthesis

Hypothesis: Custom prostheses could be used to recreate the ossicular chain and improve hearing. Background: Ossicular discontinuity or fixation occurs in 55% of cases of conductive hearing loss, with most cases involving the incus. Reconstruction has been achieved by a variety of methods; however, there has been little improvement in hearing outcomes in decades. Methods: Precise measurements of anatomic dimensions, weight, and center of gravity were taken from 19 cadaveric incudes. These measurements were combined with measurements from the medical literature and micro-computed tomography (micro-CT) of cadaveric temporal bones to generate a rasterizable incus model. As a proof of concept, incudal replacements including possible anatomic variations were then three-dimensionally (3-D) printed and inserted into a cadaveric temporal bone. Results: Our measurements of cadaveric incudes corresponded well with those from the medical literature. These measurements were combined with anatomical information from micro-CT, allowing identification of critical features of the incus, which remained constant. Other model features were modified to increase stability and facilitate synthesis, including broadening and thickening of the lenticular process and the incudomalleolar articulation. 3-D printed incudal replacements based on this model readily fit into a cadaveric temporal bone and successfully bridged the gap between the malleus and stapes. Conclusion: We have generated a model for custom 3-D synthesis of incudal prostheses. While current 3-D printing in biocompatible materials at the required size is limited, the technology is rapidly advancing, and 3-D printed incudal replacements in polylactic acid (PLA) are of the correct size and shape.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rSo0c0
via IFTTT

Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT

Hypothesis: The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. Background: An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency-map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability has not been assessed. Methods: Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess for intraobserver variability. Observer variabilities were evaluated using intra-class correlation and absolute differences. Accuracy was evaluated by comparison with gold-standard micro-CT images of the same specimens. Results: Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5 and 14.5%, respectively. Conclusion: There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts; however, MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
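As a rough illustration of the frequency mapping mentioned above (not code from the paper), the following sketch applies the Greenwood function with the commonly cited human constants (A = 165.4, a = 2.1, k = 0.88) to convert a relative position along the cochlear duct into a characteristic frequency:

    def greenwood_frequency(relative_distance_from_apex):
        """relative_distance_from_apex: 0.0 at the apex, 1.0 at the base."""
        A, a, k = 165.4, 2.1, 0.88
        return A * (10 ** (a * relative_distance_from_apex) - k)

    print(round(greenwood_frequency(0.5)))  # ~1710 Hz, halfway along the duct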

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rHGv4k
via IFTTT

Imaging Criteria to Predict Surgical Difficulties During Stapes Surgery

Background and Purpose: Stapes surgery for otosclerosis can be challenging if access to the oval window niche is restricted. The aim of this study was to determine the accuracy of the computed tomographic (CT) scan in the evaluation of anatomical distances, and to analyze its reliability in predicting surgical technical difficulties. Material and Methods: A total of 96 patients (101 ears) were enrolled in a prospective study between 2012 and May 2015. During surgery, we evaluated the distance D1 between the stapes and the facial nerve, the distance D2 between the promontory and the facial nerve after ablation of the superstructure, and the intraoperative discomfort of the surgeon. On preoperative CT scans, we measured the width and depth of the oval window niche, and the angle formed by two axes starting from the center-point of the footplate, the first tangential to the superior wall of the promontory, and the second tangential to the inferior wall of the fallopian canal. Results: Intraoperative distances D1 and D2 were correlated with the width of the oval window and with the facial-promontory angle measured on imaging. CT scan measurements of the facial-promontory angle and width of the oval window were associated with the degree of discomfort of the surgeon. The cut-off threshold for intraoperative subjective discomfort was computed as 1.1 mm for the width of the oval window niche, with a sensitivity of 71% and a specificity of 84%. Conclusion: Preoperative imaging analysis of the oval window width and the facial-promontory angle can predict operative difficulty in otosclerosis surgery.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rHBg4z
via IFTTT

Pilot Electroacoustic Analyses of a Sample of Direct-to-Consumer Amplification Products

Objective: Recent national initiatives from the White House and Institute of Medicine have focused on strategies to increase the accessibility and affordability of hearing loss treatment given the average cost of $4700 for bilateral hearing aids. More affordable direct-to-consumer hearing technologies are increasingly gaining recognition, but the performance of these devices has been poorly studied. We investigated the technical and electroacoustic capabilities of several direct-to-consumer hearing devices to inform otolaryngologists who may be asked by patients to comment on these devices. Patients/Intervention: Nine direct-to-consumer hearing devices ranging in retail cost from $144.99 to $395.00 and one direct-to-consumer hearing device with a retail cost of $30.00. Main Outcome Measure: Electroacoustic results and simulated real-ear measurements. Main electroacoustic measures are frequency response, equivalent input noise, total harmonic distortion, and maximum output sound pressure level at 90 dB. Results: Five devices met all four electroacoustic tolerances presented in this study, two devices met three tolerances, one device met two tolerances, one device met one tolerance, and one device did not meet any tolerances. Nine devices were able to approximate five of nine National Acoustics Laboratories (NAL) targets within 10 dB while only three devices were able to approximate five of nine NAL targets within a more stringent 5 dB. Conclusion: While there is substantial heterogeneity among the selection of devices, certain direct-to-consumer hearing devices may be able to provide appropriate amplification to persons with mild-to-moderate hearing loss and serve as alternatives for hearing aids in specific cases.
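A minimal sketch of the kind of target-matching count reported above; the gains and targets below are invented placeholders, not measurements from the study:

    def targets_met(measured_gain_db, target_gain_db, tolerance_db):
        # Count prescriptive targets matched within the given tolerance.
        return sum(abs(m - t) <= tolerance_db
                   for m, t in zip(measured_gain_db, target_gain_db))

    measured = [12, 15, 20, 22, 25, 27, 24, 20, 14]  # dB gain, one per test frequency
    targets = [10, 14, 18, 25, 30, 33, 30, 26, 20]   # dB prescriptive (e.g., NAL) targets
    print(targets_met(measured, targets, 10))  # 9 of 9 met within 10 dB
    print(targets_met(measured, targets, 5))   # 5 of 9 met within 5 dB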

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rcpfk7
via IFTTT

The Normal Adult Human Internal Auditory Canal: A Volumetric Multidetector Computed Tomography Study

Objective: The purpose of this study was to demonstrate that volumetric analysis of multidetector computed tomography (CT) images can be used to calculate the volume of the adult human internal auditory canal (IAC) reproducibly, and to describe the range of normal IAC volumes in the adult population with subgroup analysis of sex, age, and laterality. Background: Previous studies of the IAC have typically relied on two-dimensional measurements or on casts of cadaveric specimens to measure IAC volumes. This study is the first to report the normal ranges of IAC volumes measured by CT. Methods: Two hundred eighty-one CT scans were assessed. For the CT scans that met the inclusion criteria, a software package was used to manually contour the IACs of each subject and calculate the volumes in cubic millimeters. Subgroup analysis of laterality, sex, and age was evaluated. Interobserver agreement was calculated for the first 59 patients (118 canals). Results: Two hundred fifty-nine scans (518 canals) met the inclusion criteria. The volumes ranged from 74 to 502 mm3, with no statistically significant difference between left and right (p value = 0.69). In males, the range of volumes measured 74 to 502 mm3, while in females it ranged from 78 to 416 mm3. Males had larger IAC volumes than females (Wilcoxon rank-sum test: S = 14,845.0, p value = 0.01 on the right, and S = 14,646, p value = 0.004 on the left). No correlation was found with age (Spearman: −0.10, p value = 0.09 on the right and −0.04, p value = 0.50 on the left). Excellent interobserver agreement was found. Conclusion: IAC volumes of normal adult subjects, measured by CT, were larger in males and not significantly different with respect to age or laterality.
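For readers unfamiliar with the statistic used above, the comparison can be sketched in a few lines with SciPy; the volumes below are made-up placeholders, not the study's data:

    from scipy.stats import ranksums

    male_volumes_mm3 = [310, 285, 402, 265, 350, 295, 330]    # placeholder values
    female_volumes_mm3 = [250, 270, 300, 240, 310, 260, 280]  # placeholder values

    statistic, p_value = ranksums(male_volumes_mm3, female_volumes_mm3)
    print(f"rank-sum statistic = {statistic:.2f}, p = {p_value:.3f}")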

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rUhSzU
via IFTTT

Transcanal Endoscopic Ear Surgery for Excision of a Facial Nerve Venous Malformation With Interposition Nerve Grafting: A Case Report

Objective: To illustrate a novel approach for the surgical management of a venous malformation of the facial nerve, including interposition nerve grafting, via an exclusively transcanal endoscopic ear surgery (TEES) approach. Patient: Thirty-nine-year-old woman with a preoperative House–Brackmann (HB) grade IV facial paresis secondary to a facial nerve tumor. Intervention(s): Surgical excision and interposition nerve graft via a transcanal endoscopic approach. Main Outcome Measure(s): Completeness of resection, approach morbidities, and facial nerve outcome. Results: The TEES approach provided wide exposure of the facial nerve from the geniculate ganglion through the mastoid segment. This visualization facilitated gross total tumor resection, incus interposition ossicular reconstruction, and placement of an interposition nerve graft. The nerve graft was positioned in the fallopian canal and was secured at both ends with Surgicel. The patient had no postoperative complications. At 11-month follow-up, her facial function had returned to HB grade IV. Conclusions: This is the first report of resecting a venous malformation of the facial nerve with concomitant interposition nerve graft reconstruction via an exclusively endoscopic approach. This report adds to the growing body of evidence that TEES can manage diverse middle ear and lateral skull base pathology. Additional studies are needed to fully elucidate the risk-benefit profile of this technique.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rSq5EG
via IFTTT

Endoscopic Infracochlear Approach for Drainage of Petrous Apex Cholesterol Granulomas: A Case Series

Objective: To describe the feasibility and technical nuances of a transcanal endoscopic infracochlear approach for drainage of petrous apex cholesterol granulomas. Study Design: Retrospective case review. Setting: Tertiary care university hospital. Patients: A 32-year-old man with bilateral petrous apex cholesterol granulomas and a 54-year-old man with a left-sided petrous apex cholesterol granuloma, each with symptoms necessitating surgical intervention. Interventions: Transcanal endoscopic infracochlear approach for drainage of the cholesterol granulomas. Main Outcome Measures: Operation efficacy, corridor size, and perioperative morbidity. Results: All three cholesterol granulomas were successfully drained without violating the cochlea, jugular bulb, or carotid artery. The dimensions of the infracochlear surgical corridor measured 5 mm × 6 mm, 3.5 mm × 3.5 mm, and 6 mm × 4 mm, respectively. All corridors facilitated visualization within the cyst and allowed lysis of adhesions for additional cyst content eradication. All patients had resolution of their acute symptoms. Two of the three operated ears had serviceable hearing before and after the procedures. One patient required revision surgery 2 months after the initial procedure secondary to recurrent symptoms from acute hemorrhage within the cyst cavity. The infracochlear tract in this patient was noted to be patent. Conclusions: A transcanal endoscopic infracochlear approach is feasible for the management of cholesterol granuloma. The surgical access was wide enough to introduce the endoscope into the petrous apex cavity in each case. Further studies are needed to compare the efficacy and perioperative morbidity with those of traditional postauricular transtemporal approaches.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rHGwoU
via IFTTT

Epidemiology of Dizzy Patient Population in a Neurotology Clinic and Predictors of Peripheral Etiology

Objective: To compare the proportion of peripheral versus nonperipheral dizziness etiologies among all patients, inclusive of those presenting primarily or as referrals, to rank diagnoses in order of frequency, to determine whether or not age and sex predict diagnosis, and to determine which subgroups tended to undergo formal vestibular testing. Study Design: Retrospective cohort. Setting: Academic neurotology clinic. Patients: Neurotology clinic patients older than 18 years with a chief complaint of dizziness. Intervention(s): None. Main Outcome Measure(s): Age, sex, diagnosis, record of vestibular testing. Results: Two thousand seventy-nine patients were assigned 2,468 diagnoses, of which 57.7 and 42.3% were of peripheral and nonperipheral etiologies, respectively. The most common diagnoses were Ménière's (23.0%), vestibular migraine (19.3%), benign paroxysmal positional vertigo (BPPV) (19.1%), and central origin, nonmigraine (16.4%). Peripheral diagnoses were more likely to be found in men than in women (odds ratio [OR] 1.59). Peripheral diagnoses were most likely to be found in the 60 to 69 age group (OR 3.82). There was not a significant difference in rate of vestibular testing between women and men. Among patients with two diagnoses, the most common combinations were vestibular migraine and BPPV, followed by vestibular migraine and Ménière's. Conclusions: A large proportion of patients seen for the chief complaint of dizziness in the neurotology clinic were found not to have a peripheral etiology of their symptoms. These data challenge a prevalent dogma that the most common causes of dizziness are peripheral: BPPV, vestibular neuritis, and Ménière's disease. Age and sex are statistically significant predictors of peripheral etiology of dizziness.
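For orientation, an odds ratio like the one quoted above is just a ratio of odds from a 2x2 table; the counts below are invented for illustration and do not come from this study:

    def odds_ratio(a, b, c, d):
        # a, b: peripheral / nonperipheral diagnoses in men; c, d: the same in women.
        return (a / b) / (c / d)

    print(round(odds_ratio(520, 300, 900, 830), 2))  # ~1.6 with these invented counts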

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rHFJo6
via IFTTT

Progression of Unilateral Hearing Loss in Children With and Without Ipsilateral Cochlear Nerve Canal Stenosis: A Hazard Analysis

Objective: To investigate the risk of hearing loss progression in each ear among children with unilateral hearing loss associated with ipsilateral bony cochlear nerve canal (BCNC) stenosis. Setting: Tertiary pediatric referral center. Patients: Children diagnosed with unilateral hearing loss who had undergone temporal bone computed tomography imaging and had at least 6 months of follow-up audiometric testing were identified from a prospective audiological database. Interventions: Two pediatric radiologists blinded to the affected ear evaluated imaging for temporal bone anomalies and measured bony cochlear nerve canal width independently. All available audiograms were reviewed, and air conduction thresholds were documented. Main Outcome Measure: Progression of hearing loss was defined by a 10 dB increase in air conduction pure-tone average. Results: One hundred twenty-eight children met the inclusion criteria. Of these, 54 (42%) had a temporal bone anomaly, and 22 (17%) had ipsilateral BCNC stenosis. At 12 months, rates of progression in the ipsilateral ear were as follows: 12% among those without a temporal bone anomaly, 13% among those with a temporal bone anomaly, and 17% among those with BCNC stenosis. Children with BCNC stenosis had a significantly greater risk of progression in their ipsilateral ear compared with children with no stenosis: hazard ratio (HR) 2.17, 95% confidence interval (CI) (1.01, 4.66), p value 0.046. When we compared children with BCNC stenosis to those with normal temporal bone imaging, we found that the children with stenosis had a nearly two times greater risk estimate for progression, but this difference did not reach significance, HR 1.9, CI (0.8, 4.3), p = 0.1. No children with BCNC stenosis developed hearing loss in their contralateral ear by 12 months of follow-up. Conclusion: Children with bony cochlear nerve canal stenosis may be at increased risk for progression in their ipsilateral ear. Audiometric and medical follow-up for these children should be considered.
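One common way to obtain a hazard ratio of this kind is a Cox proportional hazards model; the abstract does not specify the authors' exact model, and the tiny data frame below (lifelines package, invented values and column names) is only a sketch of the idea:

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "months_followed": [12, 24, 6, 18, 36, 9, 30, 15],
        "progressed": [1, 0, 1, 0, 0, 1, 0, 1],      # 1 = >=10 dB rise in pure-tone average
        "bcnc_stenosis": [1, 0, 1, 0, 1, 1, 0, 0],   # ipsilateral BCNC stenosis present
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months_followed", event_col="progressed")
    cph.print_summary()  # exp(coef) for bcnc_stenosis is the hazard ratio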

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rSL22k
via IFTTT

Single-Center Study Investigating Foreign Language Acquisition at School in Children, Adolescents, and Young Adults With Uni- or Bilateral Cochlear Implants in the Swiss German Population

Objective: To evaluate foreign language acquisition at school in cochlear implant patients. Study Design: Cohort study. Setting: CI center. Patients: Forty-three cochlear implant (CI) patients (10–18 yr) were evaluated. CI nonusers and patients with CI-explantation, incomplete datasets, mental retardation, or concomitant medical disorders were excluded. Intervention(s): Additional data (type of schooling, foreign language learning, and bilingualism) were obtained with questionnaires. German-speaking children with a foreign tuition language (English and/or French) at school were enrolled for further testing. Main Outcome Measure(s): General patient data, auditory data, and foreign language data from both questionnaires and tests were collected and analyzed. Results: Thirty-seven of 43 questionnaires (86%) were completed. Sixteen (43%) were in mainstream education. Twenty-seven CI users (73%) had foreign language learning at school. Fifteen of these were in mainstream education (55%), the others in special schooling. Of the 10 CI users without foreign language learning, one (10%) was in mainstream education and nine (90%) were in special schooling. Eleven German-speaking CI users were further tested in English and six additionally in French. For reading skills, the school objectives for English were reached in 7 of 11 pupils (64%) and for French in 3 of 6 pupils (50%). For listening skills, 3 of 11 pupils (27%) reached the school norm in English and none in French. Conclusions: Almost 75% of our CI users learn foreign language(s) at school. A small majority of the tested CI users reached the current school norm in English and French for reading skills, whereas for listening skills most were not able to reach the norm.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rU6cgu
via IFTTT

Cochlear Implantation in Adults With Asymmetric Hearing Loss: Benefits of Bimodal Stimulation

Objective: This study addresses the outcome of cochlear implantation in addition to hearing aid use in patients with asymmetric sensorineural hearing loss. Study Design: Prospective longitudinal study. Setting: Tertiary referral center. Patients: Seven adults with asymmetric sensorineural hearing loss, i.e., less than 30% aided speech recognition in their worst hearing ear and 60 to 85% speech recognition in their best hearing ear. All patients had a postlingual onset of their hearing loss and less than 20 years of auditory deprivation of their worst hearing ear. Intervention: Cochlear implantation in the functionally deaf ear. Main Outcome Measures: Speech recognition in quiet, speech recognition in noise, spatial speech recognition, localization abilities, music appreciation, and quality of life. Measurements were performed before cochlear implantation and 3, 6, and 12 months after cochlear implantation. Results: Before cochlear implantation, the average speech recognition of the ear fitted with a hearing aid was 74%. Cochlear implantation eventually resulted in an average speech recognition of 75%. Bimodal stimulation yielded speech recognition scores of 82, 86, and 88% after 3, 6, and 12 months, respectively. At all time intervals, bimodal stimulation resulted in a significantly better speech recognition as compared with stimulation with only hearing aid or only cochlear implant (CI). Speech recognition in noise and spatial speech recognition significantly improved as well as the ability to localize sounds and the quality of life. Conclusion: This study demonstrated that patients are able to successfully integrate electrical stimulation with contralateral acoustic amplification and benefit from bimodal stimulation. Therefore, we think that cochlear implantation should be considered in this particular group of patients, even in the presence of substantial residual hearing on the contralateral side.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rStdR8
via IFTTT

Categorical loudness scaling in cochlear implant recipients.

Int J Audiol. 2017 Jun 22;:1-8

Authors: Busby PA, Au A

Abstract
OBJECTIVE: This study investigated categorical loudness scaling in a large group of cochlear implant (CI) recipients.
DESIGN: Categorical loudness was measured for individually determined sets of current amplitudes on apical, mid and basal electrodes of the Nucleus array.
STUDY SAMPLE: Thirty adult subjects implanted with the Nucleus CI.
RESULTS: Subjects were generally reliable in categorical loudness scaling. As expected, current levels eliciting the same loudness categories differed across subjects and electrodes in many cases. After scaling the electric levels to remove differences in dynamic ranges across subjects and electrodes, the across-subject loudness functions for the three electrodes were very similar.
CONCLUSIONS: Scaling electric current levels to remove differences in dynamic range, as implemented in the Nucleus processor, ensures uniform loudness across the array and across CI recipients. The results also showed that categorical loudness scaling for electric stimulation was similar to that for acoustic stimulation in normal-hearing subjects. These findings could be used as a guide for aligning electric and acoustic loudness in CI recipients with contralateral hearing.
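As a minimal illustration (not manufacturer code) of what scaling to the dynamic range means here, an electric level can be expressed as a percentage of the range between an electrode's threshold (T) and comfort (C) levels:

    def percent_dynamic_range(current_level, t_level, c_level):
        return 100.0 * (current_level - t_level) / (c_level - t_level)

    print(percent_dynamic_range(170, 150, 200))  # 40.0 (% of the T-to-C range)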

PMID: 28639840 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2sBDXoR
via IFTTT

SOX2 is required for inner ear neurogenesis.

Sci Rep. 2017 Jun 22;7(1):4086

Authors: Steevens AR, Sookiasian DL, Glatzer JC, Kiernan AE

Abstract
Neurons of the cochleovestibular ganglion (CVG) transmit hearing and balance information to the brain. During development, a select population of early otic progenitors express NEUROG1, delaminate from the otocyst, and coalesce to form the neurons that innervate all inner ear sensory regions. At present, the selection process that determines which otic progenitors activate NEUROG1 and adopt a neuroblast fate is incompletely understood. The transcription factor SOX2 has been implicated in otic neurogenesis, but its requirement in the specification of the CVG neurons has not been established. Here we tested SOX2's requirement during inner ear neuronal specification using a conditional deletion paradigm in the mouse. SOX2 deficiency at otocyst stages caused a near-absence of NEUROG1-expressing neuroblasts, increased cell death in the neurosensory epithelium, and significantly reduced the CVG volume. Interestingly, a milder decrease in neurogenesis was observed in heterozygotes, indicating SOX2 levels are important. Moreover, fate-mapping experiments revealed that the timing of SOX2 expression did not parallel the established vestibular-then-auditory sequence. These results demonstrate that SOX2 is required for the initial events in otic neuronal specification including expression of NEUROG1, although fate-mapping results suggest SOX2 may be required as a competence factor rather than a direct initiator of the neural fate.

PMID: 28642583 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2sBuFsU
via IFTTT

Scuba diving and otology: a systematic review with recommendations on diagnosis, treatment and post-operative care.

Diving Hyperb Med. 2017 Jun;47(2):97-109

Authors: Livingstone DM, Smith KA, Lange B

Abstract
Scuba diving is a popular recreational and professional activity with inherent risks. Complications related to barotrauma and decompression illness can pose significant morbidity to a diver's hearing and balance systems. The majority of dive-related injuries affect the head and neck, particularly the outer, middle and inner ear. Given the high incidence of otologic complications from diving, an evidence-based approach to the diagnosis and treatment of otic pathology is a necessity. We performed a systematic and comprehensive literature review including the pathophysiology, diagnosis, and treatment of otologic pathology related to diving. This included inner, middle, and outer ear anatomic subsites, as well as facial nerve complications, mal de debarquement syndrome, sea sickness and fitness to dive recommendations following otologic surgery. Sixty-two papers on diving and otologic pathology were included in the final analysis. We created a set of succinct evidence-based recommendations on each topic that should inform clinical decisions by otolaryngologists, dive medicine specialists and primary care providers when faced with diving-related patient pathology.

PMID: 28641322 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2sBuEoQ
via IFTTT

Self-Adjustment of Upper Electrical Stimulation Levels in CI Programming and the Effect on Auditory Functioning

Objectives: With current cochlear implants (CIs), CI recipients achieve good speech perception in quiet surroundings. However, in acoustically complex, real-life environments, speech comprehension remains difficult and sound quality often remains poor. It is, therefore, a challenge to program CIs for such environments in a clinic. The CI manufacturer Cochlear Ltd. recently introduced a remote control that enables CI recipients to alter the upper stimulation levels of their user programs themselves. In this concept, called remote assistant fitting (RAF), bass and treble controls can be adjusted by applying a tilt to emphasize either the low- or high-frequency C-levels, respectively. This concept of self-programming may be able to overcome limitations associated with fine-tuning the CI sound processor in a clinic. The aim of this study was to investigate to what extent CI recipients already accustomed to their clinically fitted program would adjust the settings in daily life if able to do so. Additionally, we studied the effects of these changes on auditory functioning in terms of speech intelligibility (in quiet and in noise), noise tolerance, and subjectively perceived speech perception and sound quality. Design: Twenty-two experienced adult CI recipients (implant use >12 months) participated in this prospective clinical study, which used a within-subject repeated measures design. All participants had phoneme scores of ≥70% at 65 dB SPL in quiet conditions, and all used a Cochlear Nucleus CP810 sound processor. Auditory performance was tested by a speech-in-quiet test, a speech-in-noise test, an acceptable noise level test, and a questionnaire about perceived auditory functioning, that is, a speech and sound quality (SSQ-C) questionnaire. The first session consisted of a baseline test in which the participants used their own CI program and were instructed on how to use RAF. After the first session, participants used RAF for 3 weeks at home. After these 3 weeks, the participants returned to the clinic for auditory functioning tests with their self-adjusted programs and completed the SSQ-C. Results: Fifteen participants (68%) adjusted their C-level frequency profile by more than 5 clinical levels for at least one electrode. Seven participants preferred a higher contribution of the high frequencies relative to the low frequencies, while five participants preferred more low-frequency stimulation. One-third of the participants adjusted the high and low frequencies equally, while some participants mainly used the overall volume to change their settings. Several SSQ-C subscale scores showed an improvement in perceived auditory functioning after the subjects used RAF. No significant change was found on the auditory functioning tests for speech-in-quiet, speech-in-noise, or acceptable noise level. Conclusions: The majority of experienced CI users made modest changes in the settings of their programs in various ways and were able to do so with the RAF. After altering the programs, the participants reported improved speech perception in quiet environments and improved perceived sound quality without compromising auditory performance. Therefore, it can be concluded that self-adjustment of CI settings is a useful and clinically applicable tool that may help CI recipients to improve perceived sound quality in their daily lives.
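To make the "tilt" idea concrete, here is a purely hypothetical sketch of a linear bass/treble tilt applied to a flat C-level profile; the function, electrode ordering, and clinical-unit values are assumptions for illustration, not Cochlear's RAF implementation:

    def apply_tilt(c_levels_apical_to_basal, tilt_units):
        # Positive tilt raises basal (high-frequency) C-levels and lowers apical
        # (low-frequency) ones; negative tilt does the opposite.
        n = len(c_levels_apical_to_basal)
        return [round(c + tilt_units * (i / (n - 1) - 0.5))
                for i, c in enumerate(c_levels_apical_to_basal)]

    original = [200] * 22              # flat C-level profile in clinical units
    tilted = apply_tilt(original, 10)  # a 10-unit "treble" emphasis
    print(tilted[0], tilted[-1])       # 195 205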

from #Audiology via ola Kala on Inoreader http://ift.tt/2tF3USX
via IFTTT

Assessing Sensorineural Hearing Loss Using Various Transient-Evoked Otoacoustic Emission Stimulus Conditions

Objectives: An important clinical application of transient-evoked otoacoustic emissions (TEOAEs) is to evaluate cochlear outer hair cell function for the purpose of detecting sensorineural hearing loss (SNHL). Double-evoked TEOAEs were measured using a chirp stimulus, in which the stimuli had an extended frequency range compared to clinical tests. The present study compared TEOAEs recorded using an unweighted stimulus presented at either ambient pressure or tympanometric peak pressure (TPP) in the ear canal and TEOAEs recorded using a power-weighted stimulus at ambient pressure. The unweighted stimulus had approximately constant incident pressure magnitude across frequency, and the power-weighted stimulus had approximately constant absorbed sound power across frequency. The objective of this study was to compare TEOAEs from 0.79 to 8 kHz using these three stimulus conditions in adults to assess test performance in classifying ears as having either normal hearing or SNHL. Design: Measurements were completed on 87 adult participants. Eligible participants had either normal hearing (N = 40; M/F = 16/24; mean age = 30 years) or SNHL (N = 47; M/F = 20/27; mean age = 58 years), and normal middle ear function as defined by standard clinical criteria for 226-Hz tympanometry. Clinical audiometry, immittance, and an experimental wideband test battery, which included reflectance and TEOAE tests presented for 1-min durations, were completed for each ear on all participants. All tests were then repeated 1 to 2 months later. TEOAEs were measured by presenting the stimulus in the three stimulus conditions. TEOAE data were analyzed in each hearing group in terms of the half-octave-averaged signal-to-noise ratio (SNR) and the coherence synchrony measure (CSM) at frequencies between 1 and 8 kHz. The test–retest reliability of these measures was calculated. The area under the receiver operating characteristic curve (AUC) was measured at audiometric frequencies between 1 and 8 kHz to determine TEOAE test performance in distinguishing SNHL from normal hearing. Results: Mean TEOAE SNR was ≥8.7 dB for normal-hearing ears and ≤6 dB for SNHL ears for all three stimulus conditions across all frequencies. Mean test–retest reliability of TEOAE SNR was ≤4.3 dB for both hearing groups across all frequencies, although it was generally less (≤3.5 dB) for lower frequencies (1 to 4 kHz). AUCs were between 0.85 and 0.94 for all three TEOAE conditions at all frequencies, except for the ambient TEOAE condition at 2 kHz (0.82) and for all TEOAE conditions at 5.7 kHz with AUCs between 0.78 and 0.81. Power-weighted TEOAE AUCs were significantly higher (p
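For readers unfamiliar with the AUC metric used above, a classification analysis of this kind can be sketched with scikit-learn; the group labels and SNR values below are invented and serve only to show the computation:

    from sklearn.metrics import roc_auc_score

    hearing_group = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 1 = normal hearing, 0 = SNHL
    teoae_snr_db = [12.0, 9.5, 10.8, 8.7, 11.2, 5.9, 4.1, 9.0, 2.5, 3.8]

    print(round(roc_auc_score(hearing_group, teoae_snr_db), 2))  # 0.96 for these made-up data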

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFeDNr
via IFTTT

Health-Related Quality of Life Among Young Children With Cochlear Implants and Developmental Disabilities

Objective: The present study examined differences in health-related quality of life (HRQoL) between deaf children with cochlear implants (CI) with and without developmental disabilities (DD) and differences across HRQoL domains within both groups of children. Methods: Ninety-two parents of children with CI aged 3–7 years participated in this cross-sectional study. Of these children, 43 had DD (i.e., CI-DD group) and 49 had no DD or chronic illness, demonstrating overall typical development (i.e., CI-TD group). Parents of children in both groups completed the KINDL-R, a generic HRQoL questionnaire. Parents also provided anecdotal comments to open-ended questions, and parent comments were evaluated on a CI benefits scale to assess parent-perceived benefits of CI for the deaf children with and without disabilities. Results: Children in the CI-DD group had significantly lower HRQoL compared to children in the CI-TD group, including lower scores on the self-esteem, friend, school, and family HRQoL subscales. No significant differences among groups were found on the physical well-being and emotional well-being subscales. For the CI-TD group, age at implantation correlated negatively with self-esteem and school HRQoL subscales. In the CI-DD group, children’s current age correlated negatively with family and with the total HRQoL scores. Parent anecdotal comments and scores on the CI-benefits scale indicated strong parent perceptions of benefits of implantation for children in both groups. Conclusion: Based on parents’ proxy report, findings suggest that having DD affects multiple domains of HRQoL among young children with CIs above and beyond that of the CI itself. Parents of deaf children with DD may need greater support through the CI process and follow-up than parents of deaf children without DD.

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFuNGA
via IFTTT

Comparison of Multipole Stimulus Configurations With Respect to Loudness and Spread of Excitation

Objective: Current spread is a substantial limitation of speech coding strategies in cochlear implants. Multipoles have the potential to reduce current spread and thus generate more discriminable pitch percepts. The difficulty with multipoles is reaching sufficient loudness. The primary goal was to compare the loudness characteristics and spread of excitation (SOE) of three types of phased array stimulation, a novel multipole, with three more conventional configurations. Design: Fifteen postlingually deafened cochlear implant users performed psychophysical experiments addressing SOE, loudness scaling, loudness threshold, loudness balancing, and loudness discrimination. Partial tripolar stimulation (pTP, σ = 0.75), TP, phased array with 16 (PA16) electrodes, and restricted phased array with five (PA5) and three (PA3) electrodes were compared with a reference monopolar stimulus. Results: Despite a similar loudness growth function, there were considerable differences in current expenditure. The most energy-efficient multipole was the pTP, followed by PA16 and PA5/PA3. TP clearly stood out as the least efficient one. Although the electric dynamic range was larger with multipolar configurations, the number of discriminable steps in loudness was not significantly increased. The SOE experiment could not demonstrate any difference between the stimulation strategies. Conclusions: The loudness characteristics of all five multipolar configurations tested are similar. Because of their higher energy efficiency, pTP and PA16 are the most favorable candidates for future testing in clinical speech coding strategies.

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFgWjk
via IFTTT

Using Neural Response Telemetry to Monitor Physiological Responses to Acoustic Stimulation in Hybrid Cochlear Implant Users

Objective: This report describes the results of a series of experiments where we use the neural response telemetry (NRT) system of the Nucleus cochlear implant (CI) to measure the response of the peripheral auditory system to acoustic stimulation in Nucleus Hybrid CI users. The objectives of this study were to determine whether they could separate responses from hair cells and neurons and to evaluate the stability of these measures over time. Design: Forty-four CI users participated. They all had residual acoustic hearing and used a Nucleus Hybrid S8, S12, or L24 CI or the standard lateral wall CI422 implant. The NRT system of the CI was used to trigger an acoustic stimulus (500-Hz tone burst or click), which was presented at a low stimulation rate (10, 15, or 50 per second) to the implanted ear via an insert earphone and to record the cochlear microphonic, the auditory nerve neurophonic and the compound action potential (CAP) from an apical intracochlear electrode. To record acoustically evoked responses, a longer time window than is available with the commercial NRT software is required. This limitation was circumvented by making multiple recordings for each stimulus using different time delays between the onset of stimulation and the onset of averaging. These recordings were then concatenated off-line. Matched recordings elicited using positive and negative polarity stimuli were added off-line to emphasize neural potentials (SUM) and subtracted off-line to emphasize potentials primarily generated by cochlear hair cells (DIF). These assumptions regarding the origin of the SUM and DIF components were tested by comparing the magnitude of these derived responses recorded using various stimulation rates. Magnitudes of the SUM and DIF components were compared with each other and with behavioral thresholds. Results: SUM and DIF components were identified for most subjects, consistent with both hair cell and neural responses to acoustic stimulation. For a subset of the study participants, the DIF components grew as stimulus level was increased, but little or no SUM components were identified. Latency of the CAPs in response to click stimuli was long relative to reports in the literature of recordings obtained using extracochlear electrodes. This difference in response latency and general morphology of the CAPs recorded was likely due to differences across subjects in hearing loss configuration. The use of high stimulation rates tended to decrease SUM and CAP components more than DIF components. We suggest this effect reflects neural adaptation. In some individuals, repeated measures were made over intervals as long as 9 months. Changes over time in DIF, SUM, and CAP thresholds mirrored changes in audiometric threshold for the subjects who experienced loss of acoustic hearing in the implanted ear. Conclusions: The Nucleus NRT software can be used to record peripheral responses to acoustic stimulation at threshold and suprathreshold levels, providing a window into the status of the auditory hair cells and the primary afferent nerve fibers. These acoustically evoked responses are sensitive to changes in hearing status and consequently could be useful in characterizing the specific pathophysiology of the hearing loss experienced by this population of CI users.
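A minimal sketch of the off-line SUM/DIF derivation described above: averaged responses to opposite-polarity stimuli are added to emphasize neural potentials and subtracted to emphasize hair-cell (cochlear microphonic) potentials. The arrays are placeholders, and dividing by 2 is an averaging convention assumed here rather than stated in the abstract:

    import numpy as np

    response_positive = np.array([0.2, 0.5, 0.9, 0.4, -0.1])   # averaged response, condensation (µV)
    response_negative = np.array([-0.1, 0.4, 1.0, 0.3, -0.2])  # averaged response, rarefaction (µV)

    sum_component = (response_positive + response_negative) / 2  # emphasizes neural potentials
    dif_component = (response_positive - response_negative) / 2  # emphasizes hair-cell potentials
    print(sum_component, dif_component)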

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFj4HZ
via IFTTT

Objective Identification of Simulated Cochlear Implant Settings in Normal-Hearing Listeners Via Auditory Cortical Evoked Potentials

Objectives: Providing cochlear implant (CI) patients the optimal signal processing settings during mapping sessions is critical for facilitating their speech perception. Here, we aimed to evaluate whether auditory cortical event-related potentials (ERPs) could be used to objectively determine optimal CI parameters. Design: While recording neuroelectric potentials, we presented a set of acoustically vocoded consonants (aKa, aSHa, and aNa) to normal-hearing listeners (n = 12) that simulated speech tokens processed through four different combinations of CI stimulation rate and number of spectral maxima. Parameter settings were selected to feature relatively fast/slow stimulation rates and high/low number of maxima; 1800 pps/20 maxima, 1800/8, 500/20 and 500/8. Results: Speech identification and reaction times did not differ with changes in either the number of maxima or stimulation rate indicating ceiling behavioral performance. Similarly, we found that conventional univariate analysis (analysis of variance) of N1 and P2 amplitude/latency failed to reveal strong modulations across CI-processed speech conditions. In contrast, multivariate discriminant analysis based on a combination of neural measures was used to create “neural confusion matrices” and identified a unique parameter set (1800/8) that maximally differentiated speech tokens at the neural level. This finding was corroborated by information transfer analysis which confirmed these settings optimally transmitted information in listeners’ neural and perceptual responses. Conclusions: Translated to actual implant patients, our findings suggest that scalp-recorded ERPs might be useful in determining optimal signal processing settings from among a closed set of parameter options and aid in the objective fitting of CI devices.
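A sketch of what a "neural confusion matrix" analysis of this kind can look like; the features and labels below are simulated placeholders (random numbers), not the study's ERP data, and the paper's exact discriminant procedure may differ:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(90, 4))               # e.g., N1/P2 amplitudes and latencies per trial
    y = np.repeat(["aKa", "aSHa", "aNa"], 30)  # speech token presented on each trial

    predicted = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
    print(confusion_matrix(y, predicted, labels=["aKa", "aSHa", "aNa"]))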

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFuN9y
via IFTTT

Sound Localization and Speech Perception in Noise of Pediatric Cochlear Implant Recipients: Bimodal Fitting Versus Bilateral Cochlear Implants

Objectives: The aim of this study was to compare binaural performance of auditory localization task and speech perception in babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Design: Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from −90° azimuth to +90° azimuth (15° interval) in horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker during each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated the hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in the following four different listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or −90°), from both side speakers (±90°). The speech, spatial, and quality questionnaire was administered. Results: When the two groups of children were directly compared with each other, there was no significant difference in localization accuracy ability or hemifield identification score under binaural condition. Performance in speech perception test was also similar to each other under most babble conditions. However, when the babble was from the first device side (CI side for children with bimodal stimulation or first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners. Speech, spatial, and quality scores were comparable with each other between the two groups. Conclusions: Overall, the binaural performance was similar to each other between children who are fit with two CIs (CI + CI) and those who use bimodal stimulation (HA + CI) in most conditions. However, the bilateral CI group showed better speech perception than the bimodal CI group when babble was from the first device side (first CI side for bilateral CI users or CI side for bimodal listeners). Therefore, if bimodal performance is significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that a child should be considered for transition from bimodal stimulation to bilateral CIs.

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFwS51
via IFTTT

Estimation of Minor Conductive Hearing Loss in Humans Using Distortion Product Otoacoustic Emissions

Objectives: Conductive hearing loss (CHL) systematically alters distortion product otoacoustic emission (DPOAE) levels through attenuation of both the primary tones and the evoked response by the middle ear, as well as through modification of the effective L1–L2 relationship within the cochlea. It has been postulated that, if optimal primary tone level relationships for an ear without CHL are known or can be estimated accurately and a CHL can be presumed to attenuate both primary tones to a similar extent, the adjustment to L1 required to restore an optimal L1–L2 separation following CHL induction can be utilized to estimate CHL magnitude objectively. The primary aim of this study was to assess the feasibility of objectively estimating experimentally produced CHL in humans by comparing CHL estimates resulting from DPOAE- and pure-tone audiometry-based methods. A secondary aim was to compare the accuracy of DPOAE-based CHL estimates when obtained using generic, as opposed to ear-specific, optimal primary tone level formula parameters. Design: For a single ear of 30 adults with normal hearing, auditory threshold for a 1 kHz tone was obtained using automated Békésy audiometry at an ear-canal pressure of 0 daPa, as well as at a negative pressure sufficient for increasing threshold by 3 to 10 dB. The difference in threshold for the ear-canal pressure conditions was defined as the pure-tone audiometry-based estimate of CHL (CHLPT). For the same two ear-canal pressures, optimal DPOAE primary tone level relationships were identified for f2 = 1 kHz. Specifically, for 20 ≤ L2 ≤ 70 dB SPL, L1 was varied 15 dB above and below the recommendation of L1 = 0.49 L2 + 41 (dB SPL). The difference between the optimal L1–L2 relationships for the two pressure conditions was defined as ΔL1OPT. A DPOAE-based estimate of CHL (CHLDP) was obtained using the formula CHLDP = ΔL1OPT/(1 − a), where a represents the slope of the optimal L1–L2 relationship observed in the absence of CHL. Results: A highly significant linear dependence was identified between pure-tone audiometry- and DPOAE-based estimates of CHL, r(19) = 0.71, p
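A worked example of the estimation formula quoted above; the numbers are illustrative, not results from the study. If the optimal L1–L2 relationship without CHL has slope a = 0.49 and inducing negative ear-canal pressure shifts the optimal L1 by 5.1 dB, the estimated conductive loss is 5.1 / (1 − 0.49) = 10 dB:

    def estimate_chl(delta_l1_opt_db, slope_a):
        return delta_l1_opt_db / (1.0 - slope_a)

    print(round(estimate_chl(5.1, 0.49), 1))  # 10.0 dB for these illustrative numbers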

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFwPWT
via IFTTT

A Longitudinal Investigation of the Home Literacy Environment and Shared Book Reading in Young Children With Hearing Loss

Objectives: The principal goal of this longitudinal study was to examine parent perceptions of home literacy environment (e.g., frequency of book reading, ease of book reading with child) and observed behaviors during shared book reading (SBR) interactions between parents and their children with hearing loss (HL) as compared with parents and their children with normal hearing (NH) across 3 time points (12, 24, and 36 months old). Relationships were also explored among home literacy environment factors and SBR behaviors and later language outcomes, across all three time points for parents of children with and without HL. Design: Participants were a group of parents and their children with HL (N = 17) and typically developing children with NH (N = 34). Parent perceptions about the home literacy environment were captured through a questionnaire. Observed parent behaviors and their use of facilitative language techniques were coded during videotaped SBR interactions. Children’s oral language skills were assessed using a standardized language measure at each time point. Results: No significant differences emerged between groups of parents (HL and NH) in terms of perceived home literacy environment at 12 and 36 months. However, significant group differences were evident for parent perceived ease of reading to their child at 24 months. Group differences also emerged for parental SBR behaviors for literacy strategies and interactive reading at 12 months and for engagement and interactive reading at 36 months, with parents of children with HL scoring lower in all factors. No significant relationships emerged between early home literacy factors and SBR behaviors at 12 months and oral language skills at 36 months for parents of children with NH. However, significant positive relationships were evident between early home literacy environment factors at 12 months and oral language skills at 36 months for parents and their children with HL. Conclusions: Although both groups of parents increased their frequency of SBR behaviors over time, parents of children with HL may need additional support to optimize SBR experiences to better guide their toddlers’ and preschoolers’ language skills. Early intervention efforts that focus on SBR interactions that are mutually enjoyed and incorporate specific ways to encourage parent–child conversations will be essential as children with HL acquire language.

from #Audiology via ola Kala on Inoreader http://ift.tt/2tF3NXJ
via IFTTT

Infants’ and Adults’ Use of Temporal Cues in Consonant Discrimination

imageObjectives: Adults can use slow temporal envelope cues, or amplitude modulation (AM), to identify speech sounds in quiet. Faster AM cues and the temporal fine structure, or frequency modulation (FM), play a more important role in noise. This study assessed whether fast and slow temporal modulation cues play a similar role in infants’ speech perception by comparing the ability of normal-hearing 3-month-olds and adults to use slow temporal envelope cues in discriminating consonant contrasts. Design: English consonant–vowel syllables differing in voicing or place of articulation were processed by 2 tone-excited vocoders to replace the original FM cues with pure tones in 32 frequency bands. AM cues were extracted in each frequency band with 2 different cutoff frequencies, 256 or 8 Hz. Discrimination was assessed for infants and adults using an observer-based testing method, in quiet or in a speech-shaped noise. Results: For infants, the effect of eliminating fast AM cues was the same in quiet and in noise: a high proportion of infants discriminated when both fast and slow AM cues were available, but less than half of the infants also discriminated when only slow AM cues were preserved. For adults, the effect of eliminating fast AM cues was greater in noise than in quiet: All adults discriminated in quiet whether or not fast AM cues were available, but in noise eliminating fast AM cues reduced the percentage of adults reaching criterion from 71% to 21%. Conclusions: In quiet, infants seem to depend on fast AM cues more than adults do. In noise, adults seem to depend on FM cues to a greater extent than infants do. However, infants and adults are similarly affected by a loss of fast AM cues in noise. Experience with the native language seems to change the relative importance of different acoustic cues for speech perception.
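
To make the vocoder manipulation concrete, here is a minimal single-band sketch under stated assumptions: band-pass one analysis band, take its Hilbert envelope, low-pass the envelope at 8 Hz (slow AM only) or 256 Hz (slow plus fast AM), and reimpose it on a pure tone at the band centre. The study used 32-band tone-excited vocoders; the filter orders, band edges, and sampling rate below are illustrative, not the authors' exact processing.

```python
# One band of a tone vocoder: extract the AM envelope with a chosen cutoff and
# re-impose it on a pure-tone carrier. Parameter values are illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode_band(x, fs, f_lo, f_hi, am_cutoff_hz):
    """Return one vocoded band of signal x (sampling rate fs, band f_lo..f_hi Hz)."""
    band = sosfiltfilt(butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos"), x)
    env = np.abs(hilbert(band))                            # temporal envelope of the band
    env = sosfiltfilt(butter(4, am_cutoff_hz, btype="lowpass", fs=fs, output="sos"), env)
    env = np.maximum(env, 0.0)                             # keep the envelope non-negative
    carrier = np.sin(2 * np.pi * np.sqrt(f_lo * f_hi) * np.arange(len(x)) / fs)
    return env * carrier                                   # pure tone at the band centre

if __name__ == "__main__":
    fs = 16000
    syllable = np.random.randn(fs // 2)                    # stand-in for a CV syllable
    slow_only = vocode_band(syllable, fs, 900, 1100, am_cutoff_hz=8)
    slow_and_fast = vocode_band(syllable, fs, 900, 1100, am_cutoff_hz=256)
```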

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFr0ZD
via IFTTT

Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants

imageObjectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFbAVj
via IFTTT

Brainstem Evoked Potential Indices of Subcortical Auditory Processing After Mild Traumatic Brain Injury

imageObjectives: The primary aim of this study was to assess subcortical auditory processing in individuals with chronic symptoms after mild traumatic brain injury (mTBI) by measuring auditory brainstem responses (ABRs) to standard click and complex speech stimuli. Consistent with reports in the literature of auditory problems after mTBI (despite normal-hearing thresholds), it was hypothesized that individuals with mTBI would have evidence of impaired neural encoding in the auditory brainstem compared to noninjured controls, as evidenced by delayed latencies and reduced amplitudes of ABR components. We further hypothesized that the speech-evoked ABR would be more sensitive than the click-evoked ABR to group differences because of its complex nature, particularly when recorded in a background noise condition. Design: Click- and speech-ABRs were collected in 32 individuals diagnosed with mTBI in the past 3 to 18 months. All mTBI participants were experiencing ongoing injury symptoms for which they were seeking rehabilitation through a brain injury rehabilitation management program. The same data were collected in a group of 32 age- and gender-matched controls with no history of head injury. ABRs were recorded in both left and right ears for all participants in all conditions. Speech-ABRs were collected in both quiet and in a background of continuous 20-talker babble ipsilateral noise. Peak latencies and amplitudes were compared between groups and across subgroups of mTBI participants categorized by their behavioral auditory test performance. Results: Click-ABR results were not significantly different between the mTBI and control groups. However, when comparing the control group to only those mTBI subjects with measurably decreased performance on auditory behavioral tests, small differences emerged, including delayed latencies for waves I, III, and V. Similarly, few significant group differences were observed for peak amplitudes and latencies of the speech-ABR when comparing at the whole group level but were again observed between controls and those mTBI subjects with abnormal behavioral auditory test performance. These differences were seen for the onset portions of the speech-ABR waveforms in quiet and were close to significant for the onset wave. Across groups, quiet versus noise comparisons were significant for most speech-ABR measures but the noise condition did not reveal more group differences than speech-ABR in quiet, likely because of variability and overall small amplitudes in this condition for both groups. Conclusions: The outcomes of this study indicate that subcortical neural encoding of auditory information is affected in a significant portion of individuals with long-term problems after mTBI. These subcortical differences appear to relate to performance on tests of auditory processing and perception, even in the absence of significant hearing loss on the audiogram. While confounds of age and slight differences in audiometric thresholds cannot be ruled out, these preliminary results are consistent with the idea that mTBI can result in neuronal changes within the subcortical auditory pathway that appear to relate to functional auditory outcomes. Although further research is needed, clinical audiological evaluation of individuals with ongoing post-mTBI symptoms is warranted for identification of individuals who may benefit from auditory rehabilitation as part of their overall treatment plan.

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFgW2O
via IFTTT

Speech Intelligibility as a Cue for Acceptable Noise Levels

imageObjectives: The goal of this study was to examine whether individuals are using speech intelligibility to determine how much noise they are willing to accept while listening to running speech. Previous research has shown that the amount of background noise that an individual is willing to accept while listening to speech is predictive of his or her likelihood of success with hearing aids. If it were possible to determine the criterion by which individuals make this judgment, then it may be possible to alter this cue, especially for those who are unlikely to be successful with hearing aids, and thereby improve their chances of success with hearing aids. Design: Twenty-one individuals with normal hearing and 21 with sensorineural hearing loss participated in this study. In each group, there were 7 with a low, moderate, and high acceptance of background noise, as determined by the Acceptable Noise Level (ANL) test. (During the ANL test, listeners adjusted speech to their most comfortable listening level, then background noise was added, and they adjusted it to the maximum level that they were “willing to put up with” while listening to the speech.) Participants also performed a modified version of the ANL test in which the speech was fixed at four different levels (50, 63, 75, and 88 dBA), and they adjusted only the level of the background noise. The authors calculated speech intelligibility index (SII) scores for each participant and test level. SII scores ranged from 0 (no speech information is present) to 1 (100% of the speech information is present). The authors considered a participant’s results to be consistent with a speech intelligibility-based listening criterion if his or her SIIs remained constant across all of the test conditions. Results: For all but one of the participants with normal hearing, their SIIs remained constant across the entire 38-dB range of speech levels. For all participants with hearing loss, the SII increased with speech level. Conclusions: For most listeners with normal hearing, their ANLs were consistent with the use of speech intelligibility as a listening cue; for listeners with hearing impairment, they were not. Future studies should determine what cues these individuals are using when selecting an ANL. Having a better understanding of these cues may help audiologists design and optimize treatment options for their patients.
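
The listening-criterion check lends itself to a small worked example. The sketch below uses a deliberately simplified, importance-weighted audibility calculation in place of the full SII procedure (the band levels, six-band importance weights, and the 0.05 constancy tolerance are assumptions), then tests whether the SIIs chosen at the four fixed speech levels stay essentially constant.

```python
# Simplified SII-style audibility plus a constancy check across fixed speech levels.
# Illustration only; the study computed the full SII, not this shortcut.
import numpy as np

BAND_IMPORTANCE = np.array([0.15, 0.20, 0.25, 0.20, 0.12, 0.08])  # assumed weights, sum to 1

def simple_sii(speech_band_db, noise_band_db):
    """Importance-weighted audibility; each band's SNR is mapped onto 0..1 over a 30-dB range."""
    snr = np.asarray(speech_band_db, float) - np.asarray(noise_band_db, float)
    audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(BAND_IMPORTANCE * audibility))

def intelligibility_based_criterion(siis, tolerance=0.05):
    """True if the SIIs at the different speech levels are essentially constant."""
    return (max(siis) - min(siis)) <= tolerance

if __name__ == "__main__":
    # Hypothetical listener who always accepts noise 10 dB below the speech in every band,
    # so the SII stays the same at every fixed speech level (50, 63, 75, 88 dBA).
    siis = [simple_sii([lvl] * 6, [lvl - 10] * 6) for lvl in (50, 63, 75, 88)]
    print(siis, intelligibility_based_criterion(siis))
```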

from #Audiology via ola Kala on Inoreader http://ift.tt/2tFsyTk
via IFTTT

Early Hearing Detection and Intervention-Pediatric Audiology Links to Services (EHDI-PALS): Building a National Facility Database

imageObjectives: To create a searchable web-based national audiology facility directory using a standardized survey, so that parents and providers could identify which facilities had the capacity to provide appropriate services based on a child’s age. Design: An Early Hearing Detection and Intervention-Pediatric Audiology Links to Services expert panel was convened to create a survey to collect audiology facility information. Professional practice documents were reviewed, a survey was designed to collect pertinent test protocols of each audiology facility, and a standard of care template was created to cross-check survey answers. Audiology facility information across the United States was collected and compiled into a directory structured and displayed in an interactive website, ehdipals.org. Results: From November 7, 2012, to May 21, 2016, over 1000 facilities completed the survey and became listed in the Early Hearing Detection and Intervention-Pediatric Audiology Links to Services directory. The site has registered 10,759 unique visitors, 151,981 page views, and 9134 unique searches from consumers. User feedback has been positive overall. Conclusion: A searchable, web-based facility directory has proven useful to consumers as a tool to help them differentiate whether a facility was set up to test newborns versus young children. Use of a preprogrammed standard of practice template to cross-check survey answers was also shown to be a practical aid.

from #Audiology via ola Kala on Inoreader http://ift.tt/2t6cGws
via IFTTT

Expansion of Prosodic Abilities at the Transition From Babble to Words: A Comparison Between Children With Cochlear Implants and Normally Hearing Children

imageObjectives: This longitudinal study examined the effect of emerging vocabulary production on the ability to produce the phonetic cues to prosodic prominence in babbled and lexical disyllables of infants with cochlear implants (CI) and normally hearing (NH) infants. Current research on typical language acquisition emphasizes the importance of vocabulary development for phonological and phonetic acquisition. Children with CI experience significant difficulties with the perception and production of prosody, and the role of possible top-down effects is, therefore, particularly relevant for this population. Design: Isolated disyllabic babble and first words were identified and segmented in longitudinal audio–video recordings and transcriptions for nine NH infants and nine infants with CI interacting with their parents. Monthly recordings were included from the onset of babbling until children had reached a cumulative vocabulary of 200 words. Three cues to prosodic prominence, fundamental frequency (f0), intensity, and duration, were measured in the vocalic portions of stand-alone disyllables. To represent the degree of prosodic differentiation between two syllables in an utterance, the raw values for intensity and duration were transformed to ratios, and for f0, a measure of the perceptual distance in semitones was derived. The degree of prosodic differentiation for disyllabic babble and words for each cue was compared between groups. In addition, group and individual tendencies on the types of stress patterns for babble and words were also examined. Results: The CI group had overall smaller pitch and intensity distances than the NH group. For the NH group, words had greater pitch and intensity distances than babbled disyllables. Especially for pitch distance, this was accompanied by a shift toward a more clearly expressed stress pattern that reflected the influence of the ambient language. For the CI group, the same expansion in words did not take place for pitch. For intensity, the CI group gave evidence of some increase of prosodic differentiation. The results for the duration measure showed evidence of utterance final lengthening in both groups. In words, the CI group significantly reduced durational differences between syllables so that a more even-timed, less differentiated pattern emerged. Conclusions: The onset of vocabulary production did not have the same facilitatory effect for the CI infants on the production of phonetic cues for prosody, especially for pitch. It was argued that the results for duration may reflect greater articulatory difficulties in words for the CI group than the NH group. It was suggested that the lack of clear top-down effects of the vocabulary in the CI group may be because of a lag in development caused by an initial lack of auditory stimulation, possibly compounded by the absence of auditory feedback during the babble phase.
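
As a concrete illustration of the three prominence measures described above, the short sketch below computes a duration ratio, an intensity ratio, and the perceptual pitch distance in semitones between the vocalic portions of a disyllable; the example values are hypothetical.

```python
# Prosodic-differentiation measures for one disyllable (values are hypothetical).
import math

def semitone_distance(f0_a_hz, f0_b_hz):
    """Perceptual distance between two f0 values, in semitones."""
    return abs(12.0 * math.log2(f0_a_hz / f0_b_hz))

def prominence_ratio(first_syllable, second_syllable):
    """Raw value of syllable 1 divided by that of syllable 2."""
    return first_syllable / second_syllable

if __name__ == "__main__":
    print(f"pitch distance : {semitone_distance(260.0, 220.0):.2f} semitones")
    print(f"intensity ratio: {prominence_ratio(68.0, 62.0):.2f}")   # e.g., dB values
    print(f"duration ratio : {prominence_ratio(0.18, 0.25):.2f}")   # e.g., seconds
```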

from #Audiology via ola Kala on Inoreader http://ift.tt/2t5WGub
via IFTTT

The Impact of a Cochlear Implant Electrode Array on the Middle Ear Transfer Function

imageObjectives: As a treatment for partial deafness with residual hearing in the lower frequency range, combined acoustic and electric stimulation of the cochlea has become widespread. Acoustic stimulation is provided by a hearing aid’s airborne sound and the electric stimulation by a cochlear implant electrode array, which may be inserted through the round window or a cochleostomy. To take advantage of that concept, it is essential to preserve residual hearing after surgery. Therefore, the intracochlear electrode array should not compromise middle ear vibration transmission. This study investigates the influence of different electrode types and insertion paths on the middle ear transfer function and the inner ear fluid dynamics. Design: Sound-induced oval and round window net volume velocities were calculated from vibration measurements with laser vibrometers on six nonfixated human temporal bones. After baseline measurements in the “natural” condition, a cochleostomy was drilled and closed with connective tissue. Then, four different electrode arrays were inserted through the cochleostomy. Afterwards, they were inserted through the round window while the cochleostomy was patched again with connective tissue. Results: After drilling of the cochleostomy and insertion of the electrode arrays, no systematic trends in the changes of oval and round window volume velocities were observed. Nearly all changes of middle ear transfer functions, as well as oval and round window volume velocity ratios, were statistically insignificant. Conclusions: Intracochlear electrode arrays do not significantly increase cochlear input impedance immediately after insertion. Any changes that may occur seem to be independent of electrode array type and insertion path.

from #Audiology via ola Kala on Inoreader http://ift.tt/2t5WECz
via IFTTT

Difficult conversations: talking about cost in audiology consultations with older adults.

Int J Audiol. 2017 Jun 23;:1-8

Authors: Ekberg K, Barr C, Hickson L

Abstract
OBJECTIVE: Financial cost is a barrier for many older adults in their decision to obtain hearing aids (HAs). This study aimed to examine conversations about the cost of HAs in detail within initial audiology appointments.
DESIGN: Sixty-two initial audiology appointments were video-recorded. The data were analysed using conversation analysis.
STUDY SAMPLE: Participants included 26 audiologists, 62 older adults and 17 companions.
RESULTS: Audiologists and clients displayed interactional difficulty during conversations about cost. Clients often had emotional responses to the cost of HAs, which audiologists did not attend to. Audiologists typically presented one HA cost option at a time, which led to multiple rejections from clients and made the interactions difficult. When audiologists instead offered multiple cost options at once, the interaction proceeded more smoothly.
CONCLUSIONS: Audiologists and clients were observed to have difficulty talking about HA costs. Offering clients multiple HA cost options at the same time can engage clients in the decision-making process and lead to a smoother interaction between audiologist and client in the management phase of appointments.

PMID: 28643531 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2s6PLvS
via IFTTT

Cochrane corner - a new IJA feature.

Int J Audiol. 2017 Jun 22;:1

Authors: Roeser RJ

PMID: 28639880 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2sBKK1L
via IFTTT

Prevalence and associated factors of hearing aid use among older adults in Chile.

Int J Audiol. 2017 Jun 22;:1-9

Authors: Fuentes-López E, Fuente A, Cardemil F, Valdivia G, Albala C

Abstract
OBJECTIVE: The aim of this study was to determine the prevalence of hearing aid use among older adults in Chile and the influence of variables such as education level, income level, and geographic area of residence on that prevalence.
DESIGN: A national cross-sectional survey which was carried out in 2009.
STUDY SAMPLE: A representative sample of 4766 Chilean older adults aged 60 years and above.
RESULTS: The percentage of older adults in Chile who self-reported hearing problems and used hearing aids was 8.9%. Such prevalence increased for adults living in urban areas and for those who knew about the new Chilean programme of universal access to health services (AUGE). For older adults who did not know about this programme, significant associations between the use of hearing aids and the variables of age, geographic area of residence, and income level were found.
CONCLUSIONS: People's knowledge about the AUGE programme may positively influence the use of hearing aids, although a direct causal effect cannot be established.

PMID: 28639872 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2sBA0QX
via IFTTT

Characteristics of children with unilateral hearing loss.

Int J Audiol. 2017 Jun 22;:1-10

Authors: Fitzpatrick EM, Al-Essa RS, Whittingham J, Fitzpatrick J

Abstract
OBJECTIVE: The purpose of this study was to describe the clinical characteristics of children with unilateral hearing loss (UHL), examine deterioration in hearing, and explore amplification decisions.
DESIGN: Population-based data were collected prospectively from time of diagnosis. Serial audiograms and amplification details were retrospectively extracted from clinical charts to document the trajectory and management of hearing loss.
SAMPLE: The study included all children identified with UHL in one region of Canada over a 13-year period (2003-2015) after implementation of universal newborn hearing screening.
RESULTS: Of 537 children with permanent hearing loss, 20.1% (108) presented with UHL at diagnosis. They were identified at a median age of 13.9 months (IQR: 2.8, 49.0). Children with congenital loss were identified at 2.8 months (IQR: 2.0, 3.6) and made up only 47.2% (n = 51), indicating that a substantial portion of the cohort had late-onset, acquired, or late-identified loss. A total of 42.4% (n = 39) showed deterioration in hearing, including 16 (17.4%) who developed bilateral loss. By study end, 73.1% (79/108) of children had received amplification recommendations.
CONCLUSIONS: Up to 20% of children with permanent HL are first diagnosed with UHL. About 40% are at risk of deterioration in hearing in the impaired ear, the normal-hearing ear, or both.

PMID: 28639843 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2sBTW6c
via IFTTT

Self-Adjustment of Upper Electrical Stimulation Levels in CI Programming and the Effect on Auditory Functioning

imageObjectives: With current cochlear implants (CIs), CI recipients achieve good speech perception in quiet surroundings. However, in acoustically complex, real-life environments, speech comprehension remains difficult and sound quality often remains poor. It is, therefore, a challenge to program CIs for such environments in a clinic. The CI manufacturer Cochlear Ltd. recently introduced a remote control that enables CI recipients to alter the upper stimulation levels of their user programs themselves. In this concept, called remote assistant fitting (RAF), bass and treble controls can be adjusted by applying a tilt to emphasize either the low- or high-frequency C-levels, respectively. This concept of self-programming may be able to overcome limitations associated with fine-tuning the CI sound processor in a clinic. The aim of this study was to investigate to what extent CI recipients already accustomed to their clinically fitted program would adjust the settings in daily life if able to do so. Additionally, we studied the effects of these changes on auditory functioning in terms of speech intelligibility (in quiet and in noise), noise tolerance, and subjectively perceived speech perception and sound quality. Design: Twenty-two experienced adult CI recipients (implant use >12 months) participated in this prospective clinical study, which used a within-subject repeated measures design. All participants had phoneme scores of ≥70% at 65 dB SPL in quiet conditions, and all used a Cochlear Nucleus CP810 sound processor. Auditory performance was tested by a speech-in-quiet test, a speech-in-noise test, an acceptable noise level test, and a questionnaire about perceived auditory functioning, that is, a speech and sound quality (SSQ-C) questionnaire. The first session consisted of a baseline test in which the participants used their own CI program and were instructed on how to use RAF. After the first session, participants used RAF for 3 weeks at home. After these 3 weeks, the participants returned to the clinic for auditory functioning tests with their self-adjusted programs and completed the SSQ-C. Results: Fifteen participants (68%) adjusted their C-level frequency profile by more than 5 clinical levels for at least one electrode. Seven participants preferred a higher contribution of the high frequencies relative to the low frequencies, while five participants preferred more low-frequency stimulation. One-third of the participants adjusted the high and low frequencies equally, while some participants mainly used the overall volume to change their settings. Several parts of the SSQ-C questionnaire scores showed an improvement in perceived auditory functioning after the subjects used RAF. No significant change was found on the auditory functioning tests for speech-in-quiet, speech-in-noise, or acceptable noise level. Conclusions: In conclusion, the majority of experienced CI users made modest changes in the settings of their programs in various ways and were able to do so with the RAF. After altering the programs, the participants experienced an improvement in speech perception in quiet environments and improved perceived sound quality without compromising auditory performance. Therefore, it can be concluded that self-adjustment of CI settings is a useful and clinically applicable tool that may help CI recipients to improve perceived sound quality in their daily lives.
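
For readers unfamiliar with the idea of a bass/treble tilt over upper stimulation levels, the sketch below shows one plausible way such an adjustment could be expressed: a linear ramp across the electrode array that raises one end of the C-level profile while lowering the other. This is an illustration of the concept only, not Cochlear's RAF implementation; the electrode ordering, the 22-electrode map, the clinical-level range of 0–255, and the step sizes are assumptions.

```python
# Apply a linear bass/treble tilt to a set of upper stimulation (C) levels.
# Conceptual sketch only; parameters and ranges are assumed, not vendor-specified.
import numpy as np

def apply_tilt(c_levels, tilt_steps, apical_first=True, max_level=255):
    """Tilt C-levels linearly across the array.

    c_levels     : per-electrode upper stimulation levels (clinical current levels).
    tilt_steps   : total difference applied between the two ends of the array
                   (half is added at one end, half subtracted at the other).
    apical_first : if True, the array is ordered from low-frequency (apical) to
                   high-frequency (basal) electrodes, so a positive tilt boosts treble.
    """
    c = np.asarray(c_levels, dtype=float)
    ramp = np.linspace(-tilt_steps / 2.0, tilt_steps / 2.0, num=len(c))
    if not apical_first:
        ramp = ramp[::-1]
    return np.clip(c + ramp, 0, max_level)

if __name__ == "__main__":
    flat_map = [180] * 22                                # hypothetical flat 22-electrode map
    treble_boost = apply_tilt(flat_map, tilt_steps=10)   # +5 at the basal end, -5 apical
    bass_boost = apply_tilt(flat_map, tilt_steps=-10)    # +5 at the apical end, -5 basal
    print(treble_boost[[0, -1]], bass_boost[[0, -1]])
```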

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tF3USX
via IFTTT

Assessing Sensorineural Hearing Loss Using Various Transient-Evoked Otoacoustic Emission Stimulus Conditions

imageObjectives: An important clinical application of transient-evoked otoacoustic emissions (TEOAEs) is to evaluate cochlear outer hair cell function for the purpose of detecting sensorineural hearing loss (SNHL). Double-evoked TEOAEs were measured using a chirp stimulus, in which the stimuli had an extended frequency range compared to clinical tests. The present study compared TEOAEs recorded using an unweighted stimulus presented at either ambient pressure or tympanometric peak pressure (TPP) in the ear canal and TEOAEs recorded using a power-weighted stimulus at ambient pressure. The unweighted stimulus had approximately constant incident pressure magnitude across frequency, and the power-weighted stimulus had approximately constant absorbed sound power across frequency. The objective of this study was to compare TEOAEs from 0.79 to 8 kHz using these three stimulus conditions in adults to assess test performance in classifying ears as having either normal hearing or SNHL. Design: Measurements were completed on 87 adult participants. Eligible participants had either normal hearing (N = 40; M/F = 16/24; mean age = 30 years) or SNHL (N = 47; M/F = 20/27; mean age = 58 years), and normal middle ear function as defined by standard clinical criteria for 226-Hz tympanometry. Clinical audiometry, immittance, and an experimental wideband test battery, which included reflectance and TEOAE tests presented for 1-min durations, were completed for each ear on all participants. All tests were then repeated 1 to 2 months later. TEOAEs were measured by presenting the stimulus in the three stimulus conditions. TEOAE data were analyzed in each hearing group in terms of the half-octave-averaged signal-to-noise ratio (SNR) and the coherence synchrony measure (CSM) at frequencies between 1 and 8 kHz. The test–retest reliability of these measures was calculated. The area under the receiver operating characteristic curve (AUC) was measured at audiometric frequencies between 1 and 8 kHz to determine TEOAE test performance in distinguishing SNHL from normal hearing. Results: Mean TEOAE SNR was ≥8.7 dB for normal-hearing ears and ≤6 dB for SNHL ears for all three stimulus conditions across all frequencies. Mean test–retest reliability of TEOAE SNR was ≤4.3 dB for both hearing groups across all frequencies, although it was generally less (≤3.5 dB) for lower frequencies (1 to 4 kHz). AUCs were between 0.85 and 0.94 for all three TEOAE conditions at all frequencies, except for the ambient TEOAE condition at 2 kHz (0.82) and for all TEOAE conditions at 5.7 kHz with AUCs between 0.78 and 0.81. Power-weighted TEOAE AUCs were significantly higher (p
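
As a sketch of how the classification analysis works, the example below computes an area under the ROC curve (AUC) for a single TEOAE signal-to-noise ratio in separating normal-hearing from SNHL ears. The SNR distributions and group sizes are simulated for illustration; the study derived AUCs per audiometric frequency from measured half-octave-averaged SNR and CSM values.

```python
# AUC for a hypothetical TEOAE SNR measure classifying normal-hearing vs. SNHL ears.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
snr_normal = rng.normal(loc=12.0, scale=4.0, size=40)   # simulated normal-hearing ears
snr_snhl = rng.normal(loc=4.0, scale=4.0, size=47)      # simulated SNHL ears

labels = np.concatenate([np.ones_like(snr_normal), np.zeros_like(snr_snhl)])  # 1 = normal hearing
scores = np.concatenate([snr_normal, snr_snhl])          # higher SNR should predict normal hearing
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```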

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tFeDNr
via IFTTT

Health-Related Quality of Life Among Young Children With Cochlear Implants and Developmental Disabilities

imageObjective: The present study examined differences in health-related quality of life (HRQoL) between deaf children with cochlear implants (CI) with and without developmental disabilities (DD) and differences across HRQoL domains within both groups of children. Methods: Ninety-two parents of children with CI aged 3–7 years participated in this cross-sectional study. Of these children, 43 had DD (i.e., CI-DD group) and 49 had no DD or chronic illness, demonstrating overall typical development (i.e., CI-TD group). Parents of children in both groups completed the KINDLR, a generic HRQoL questionnaire. Parents also provided anecdotal comments to open-ended questions, and parent comments were evaluated on a CI benefits scale to assess parent-perceived benefits of CI for the deaf children with and without disabilities. Results: Children in the CI-DD group had significantly lower HRQoL compared to children in the CI-TD group, including lower scores on the self-esteem, friend, school, and family HRQoL subscales. No significant differences among groups were found on the physical well-being and emotional well-being subscales. For the CI-TD group, age at implantation correlated negatively with self-esteem and school HRQoL subscales. In the CI-DD group, children’s current age correlated negatively with family and with the total HRQoL scores. Parent anecdotal comments and scores on the CI-benefits scale indicated strong parent perceptions of benefits of implantation for children in both groups. Conclusion: Based on parents’ proxy report, findings suggest that having DD affects multiple domains of HRQoL among young children with CIs above and beyond that of the CI itself. Parents of deaf children with DD may need greater support through the CI process and follow-up than parents of deaf children without DD.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tFuNGA
via IFTTT

Comparison of Multipole Stimulus Configurations With Respect to Loudness and Spread of Excitation

imageObjective: Current spread is a substantial limitation of speech coding strategies in cochlear implants. Multipoles have the potential to reduce current spread and thus generate more discriminable pitch percepts. The difficulty with multipoles is reaching sufficient loudness. The primary goal was to compare the loudness characteristics and spread of excitation (SOE) of three types of phased array stimulation, a novel multipole, with three more conventional configurations. Design: Fifteen postlingually deafened cochlear implant users performed psychophysical experiments addressing SOE, loudness scaling, loudness threshold, loudness balancing, and loudness discrimination. Partial tripolar stimulation (pTP, σ = 0.75), TP, phased array with 16 electrodes (PA16), and restricted phased array with five (PA5) and three (PA3) electrodes were compared with a reference monopolar stimulus. Results: Despite a similar loudness growth function, there were considerable differences in current expenditure. The most energy-efficient multipole was the pTP, followed by PA16 and PA5/PA3. TP clearly stood out as the least efficient one. Although the electric dynamic range was larger with multipolar configurations, the number of discriminable steps in loudness was not significantly increased. The SOE experiment could not demonstrate any difference between the stimulation strategies. Conclusions: The loudness characteristics of all five multipolar configurations tested are similar. Because of their higher energy efficiency, pTP and PA16 are the most favorable candidates for future testing in clinical speech coding strategies.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tFgWjk
via IFTTT

Using Neural Response Telemetry to Monitor Physiological Responses to Acoustic Stimulation in Hybrid Cochlear Implant Users

imageObjective: This report describes the results of a series of experiments where we use the neural response telemetry (NRT) system of the Nucleus cochlear implant (CI) to measure the response of the peripheral auditory system to acoustic stimulation in Nucleus Hybrid CI users. The objectives of this study were to determine whether they could separate responses from hair cells and neurons and to evaluate the stability of these measures over time. Design: Forty-four CI users participated. They all had residual acoustic hearing and used a Nucleus Hybrid S8, S12, or L24 CI or the standard lateral wall CI422 implant. The NRT system of the CI was used to trigger an acoustic stimulus (500-Hz tone burst or click), which was presented at a low stimulation rate (10, 15, or 50 per second) to the implanted ear via an insert earphone and to record the cochlear microphonic, the auditory nerve neurophonic and the compound action potential (CAP) from an apical intracochlear electrode. To record acoustically evoked responses, a longer time window than is available with the commercial NRT software is required. This limitation was circumvented by making multiple recordings for each stimulus using different time delays between the onset of stimulation and the onset of averaging. These recordings were then concatenated off-line. Matched recordings elicited using positive and negative polarity stimuli were added off-line to emphasize neural potentials (SUM) and subtracted off-line to emphasize potentials primarily generated by cochlear hair cells (DIF). These assumptions regarding the origin of the SUM and DIF components were tested by comparing the magnitude of these derived responses recorded using various stimulation rates. Magnitudes of the SUM and DIF components were compared with each other and with behavioral thresholds. Results: SUM and DIF components were identified for most subjects, consistent with both hair cell and neural responses to acoustic stimulation. For a subset of the study participants, the DIF components grew as stimulus level was increased, but little or no SUM components were identified. Latency of the CAPs in response to click stimuli was long relative to reports in the literature of recordings obtained using extracochlear electrodes. This difference in response latency and general morphology of the CAPs recorded was likely due to differences across subjects in hearing loss configuration. The use of high stimulation rates tended to decrease SUM and CAP components more than DIF components. We suggest this effect reflects neural adaptation. In some individuals, repeated measures were made over intervals as long as 9 months. Changes over time in DIF, SUM, and CAP thresholds mirrored changes in audiometric threshold for the subjects who experienced loss of acoustic hearing in the implanted ear. Conclusions: The Nucleus NRT software can be used to record peripheral responses to acoustic stimulation at threshold and suprathreshold levels, providing a window into the status of the auditory hair cells and the primary afferent nerve fibers. These acoustically evoked responses are sensitive to changes in hearing status and consequently could be useful in characterizing the specific pathophysiology of the hearing loss experienced by this population of CI users.
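
The off-line combination of opposite-polarity recordings can be summarized in a few lines of arithmetic. The sketch below, using placeholder arrays rather than actual NRT data, adds matched condensation and rarefaction responses to emphasize neural potentials (SUM) and subtracts them to emphasize hair-cell potentials such as the cochlear microphonic (DIF); dividing by two is simply an averaging convention and is not specified in the abstract.

```python
# Combine responses to positive- and negative-polarity acoustic stimuli, and join
# recordings made with successive averaging delays into one long trace.
import numpy as np

def sum_dif(resp_pos, resp_neg):
    """Return (SUM, DIF) derived responses from matched opposite-polarity recordings."""
    resp_pos = np.asarray(resp_pos, dtype=float)
    resp_neg = np.asarray(resp_neg, dtype=float)
    sum_resp = (resp_pos + resp_neg) / 2.0   # addition emphasizes neural potentials (SUM)
    dif_resp = (resp_pos - resp_neg) / 2.0   # subtraction emphasizes hair-cell potentials (DIF)
    return sum_resp, dif_resp

def concatenate_windows(windows):
    """Concatenate recordings made with different onset-to-averaging delays off-line."""
    return np.concatenate([np.asarray(w, dtype=float) for w in windows])

if __name__ == "__main__":
    pos = concatenate_windows([[0.10, 0.40, 0.20], [0.00, -0.10, 0.05]])    # placeholder data
    neg = concatenate_windows([[-0.10, 0.35, -0.15], [0.02, -0.12, 0.00]])
    sum_resp, dif_resp = sum_dif(pos, neg)
    print(sum_resp, dif_resp)
```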

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tFj4HZ
via IFTTT

Objective Identification of Simulated Cochlear Implant Settings in Normal-Hearing Listeners Via Auditory Cortical Evoked Potentials

imageObjectives: Providing cochlear implant (CI) patients the optimal signal processing settings during mapping sessions is critical for facilitating their speech perception. Here, we aimed to evaluate whether auditory cortical event-related potentials (ERPs) could be used to objectively determine optimal CI parameters. Design: While recording neuroelectric potentials, we presented a set of acoustically vocoded consonants (aKa, aSHa, and aNa) to normal-hearing listeners (n = 12) that simulated speech tokens processed through four different combinations of CI stimulation rate and number of spectral maxima. Parameter settings were selected to feature relatively fast/slow stimulation rates and high/low number of maxima; 1800 pps/20 maxima, 1800/8, 500/20 and 500/8. Results: Speech identification and reaction times did not differ with changes in either the number of maxima or stimulation rate indicating ceiling behavioral performance. Similarly, we found that conventional univariate analysis (analysis of variance) of N1 and P2 amplitude/latency failed to reveal strong modulations across CI-processed speech conditions. In contrast, multivariate discriminant analysis based on a combination of neural measures was used to create “neural confusion matrices” and identified a unique parameter set (1800/8) that maximally differentiated speech tokens at the neural level. This finding was corroborated by information transfer analysis which confirmed these settings optimally transmitted information in listeners’ neural and perceptual responses. Conclusions: Translated to actual implant patients, our findings suggest that scalp-recorded ERPs might be useful in determining optimal signal processing settings from among a closed set of parameter options and aid in the objective fitting of CI devices.
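
A minimal sketch of the "neural confusion matrix" idea follows: a linear discriminant classifier is trained on per-trial ERP features to decode which speech token was presented, and its cross-validated predictions are tabulated against the true tokens. The feature values are simulated and the classifier settings are assumptions; the study's exact feature set and discriminant procedure are not reproduced here.

```python
# Build a neural confusion matrix from simulated ERP features with a linear discriminant.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
tokens = ["aKa", "aSHa", "aNa"]
features, y_true = [], []
for i, token in enumerate(tokens):
    # 40 simulated trials per token; 4 ERP features (e.g., N1/P2 amplitude and latency)
    # whose means shift slightly with the token presented.
    features.append(rng.normal(loc=i, scale=1.5, size=(40, 4)))
    y_true += [token] * 40
features = np.vstack(features)

predicted = cross_val_predict(LinearDiscriminantAnalysis(), features, np.array(y_true), cv=5)
print(confusion_matrix(y_true, predicted, labels=tokens))  # rows: presented, columns: decoded
```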

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tFuN9y
via IFTTT

Sound Localization and Speech Perception in Noise of Pediatric Cochlear Implant Recipients: Bimodal Fitting Versus Bilateral Cochlear Implants

imageObjectives: The aim of this study was to compare binaural performance on an auditory localization task and a speech-perception-in-babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Design: Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from −90° azimuth to +90° azimuth (15° interval) in the horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker during each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated the hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in the following four listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or −90°), and from both side speakers (±90°). A speech, spatial, and quality questionnaire was administered. Results: When the two groups of children were directly compared, there was no significant difference in localization accuracy or hemifield identification score under the binaural condition. Performance on the speech perception test was also similar between groups under most babble conditions. However, when the babble was from the first device side (CI side for children with bimodal stimulation or first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners. Speech, spatial, and quality scores were comparable between the two groups. Conclusions: Overall, binaural performance was similar in most conditions between children fit with two CIs (CI + CI) and those who use bimodal stimulation (HA + CI). However, the bilateral CI group showed better speech perception than the bimodal group when babble was from the first device side (first CI side for bilateral CI users or CI side for bimodal listeners). Therefore, if bimodal performance is significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that a child should be considered for transition from bimodal stimulation to bilateral CIs.
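
The two localization scores described above reduce to simple arithmetic over trial-by-trial azimuths, sketched below; the example trial data are hypothetical, and the decision to exclude 0° targets from the hemifield score is an assumption, since the abstract does not say how midline trials were scored.

```python
# Mean absolute localization error and hemifield (right/left) identification score.
import numpy as np

def mean_absolute_error_deg(target_az, response_az):
    """Mean absolute azimuth error (degrees) across trials."""
    target_az = np.asarray(target_az, dtype=float)
    response_az = np.asarray(response_az, dtype=float)
    return float(np.mean(np.abs(target_az - response_az)))

def hemifield_score(target_az, response_az):
    """Proportion of lateral (non-midline) trials answered in the correct hemifield."""
    target_az = np.asarray(target_az, dtype=float)
    response_az = np.asarray(response_az, dtype=float)
    lateral = target_az != 0
    return float(np.mean(np.sign(target_az[lateral]) == np.sign(response_az[lateral])))

if __name__ == "__main__":
    targets = [-90, -45, 0, 30, 75]      # hypothetical target azimuths, degrees
    responses = [-75, -60, 15, -15, 90]  # hypothetical response azimuths, degrees
    print(mean_absolute_error_deg(targets, responses), hemifield_score(targets, responses))
```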

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2tFwS51
via IFTTT