Friday, September 8, 2017

Inner Speech's Relationship With Overt Speech in Poststroke Aphasia

Purpose
Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship between inner and overt speech remains largely unclear, as few studies have investigated it directly. The present study examines how relatively preserved inner speech in aphasia relates to selected measures of language and cognition.
Method
Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years; time since stroke 8–111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or were left unclassified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004).
Results
The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech.
Conclusions
As in previous research, we show that relatively preserved inner speech can co-occur with otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps because of shared resources with verbal working memory) and for written picture description (perhaps because perceived task difficulty increases reliance on inner speech). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile.
Supplemental Materials
http://ift.tt/2xiwlv4

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-S-16-0270/2653957/Inner-Speechs-Relationship-With-Overt-Speech-in
via IFTTT

Speech Recognition and Cognitive Skills in Bimodal Cochlear Implant Users

Purpose
To examine the relation between speech recognition and cognitive skills in bimodal cochlear implant (CI) and hearing aid users.
Method
Seventeen bimodal CI users (28–74 years) were recruited to the study. Speech recognition tests were carried out in quiet and in noise. The cognitive tests included the Reading Span Test, measuring working memory capacity (Daneman & Carpenter, 1980), and the Trail Making Test, measuring processing speed and executive functioning (Reitan, 1958, 1992). Data were analyzed using paired-sample t tests, Pearson correlations, and partial correlations controlling for age.
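The partial-correlation step (controlling for age) can be sketched as follows. This is a generic illustration of how a first-order partial correlation is computed, not the authors' analysis code; the function name and data layout are invented for the example.

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y after regressing out z (e.g., age).

    Equivalent to correlating the least-squares residuals of x on z with
    the residuals of y on z, which is the definition of a first-order
    partial correlation.
    """
    x, y, z = map(np.asarray, (x, y, z))
    design = np.column_stack([np.ones_like(z), z])   # intercept + covariate
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

With only 17 listeners, such a correlation carries wide confidence intervals, which is one reason the abstract's claims are hedged ("some cognitive tests").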
Results
The results indicate that performance on some cognitive tests predicts speech recognition and that bimodal listening yields a significant improvement for speech in quiet compared to unilateral CI listening. However, the current results also suggest that bimodal listening requires different cognitive skills than unimodal CI listening does. This likely relates to the difficulty of having to integrate 2 different signals and then map the integrated signal onto representations stored in long-term memory.
Conclusions
Even though participants obtained speech recognition benefit from bimodal listening, the results suggest that processing bimodal stimuli involves different cognitive skills than does unimodal listening in quiet. Clinically, it is important to consider this when assessing treatment outcomes.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-H-16-0276/2653958/Speech-Recognition-and-Cognitive-Skills-in-Bimodal
via IFTTT

Effects of Lexical Variables on Silent Reading Comprehension in Individuals With Aphasia: Evidence From Eye Tracking

Purpose
Previous eye-tracking research has suggested that individuals with aphasia (IWA) do not assign syntactic structure on their first pass through a sentence during silent reading comprehension. The purpose of the present study was to investigate the time course with which lexical variables affect silent reading comprehension in IWA. Three lexical variables were investigated: word frequency, word class, and word length.
Methods
IWA and control participants without brain damage participated in the experiment. Participants read sentences while a camera tracked their eye movements.
Results
IWA showed effects of word class, word length, and word frequency that were similar to or greater than those observed in controls.
Conclusions
IWA showed sensitivity to lexical variables on the first pass through the sentence. The results are consistent with the view that IWA focus on lexical access on their first pass through a sentence and then work to build syntactic structure on subsequent passes. In addition, IWA showed very long rereading times and low skipping rates overall, which may contribute to some of the group differences in reading comprehension.
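The first-pass, rereading, and skipping measures referred to above can be made concrete with a small sketch. This is a generic implementation of the standard eye-tracking definitions, not the authors' analysis pipeline; the input format (ordered pairs of word index and fixation duration) is assumed for illustration.

```python
def first_pass_measures(fixations, n_words):
    """Compute first-pass gaze duration and skipped words for one trial.

    fixations: ordered (word_index, duration_ms) pairs.
    A word's first pass is the run of consecutive fixations beginning when
    the word is first fixated, provided no word to its right has been
    fixated yet; a word never fixated during the first pass counts as
    skipped, even if it is fixated later during rereading.
    """
    gaze = [0.0] * n_words          # first-pass gaze duration per word
    entered = [False] * n_words     # fixated during the first pass?
    rightmost = -1                  # rightmost word entered so far
    j = 0
    while j < len(fixations):
        w, _ = fixations[j]
        if not entered[w] and w > rightmost:
            entered[w] = True
            rightmost = w
            # Consume consecutive fixations on w (its first-pass visit).
            while j < len(fixations) and fixations[j][0] == w:
                gaze[w] += fixations[j][1]
                j += 1
        else:
            j += 1                  # regression or rereading: not first pass
    skipped = [w for w in range(n_words) if not entered[w]]
    return gaze, skipped
```

For example, the sequence (0, 200), (1, 150), (1, 100), (3, 180), (2, 120) yields first-pass gaze durations of 200, 250, 0, and 180 ms, with word 2 skipped: it was only fixated after word 3, during a regression.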

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0045/2653404/Effects-of-Lexical-Variables-on-Silent-Reading
via IFTTT


Global Hearing Health Care: New Perspectives

A recent paper by Wilson et al. (2017) addressed the growing global burden of disease (GBD), which indicates an increasing, and now alarmingly high, burden of hearing loss worldwide. According to the authors, hearing loss was the fourth leading contributor to years lived with disability (YLD) worldwide in 2015, up from the 11th leading cause in 2010.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xRd3JF
via IFTTT

A Novel Loss-of-Function Mutation in HOXB1 Associated with Autosomal Recessive Hereditary Congenital Facial Palsy in a Large Iranian Family.

Mol Syndromol. 2017 Aug;8(5):261-265

Authors: Vahidi Mehrjardi MY, Maroofian R, Kalantar SM, Jaafarinia M, Chilton J, Dehghani M

Abstract
Hereditary congenital facial palsy (HCFP) is a rare congenital cranial dysinnervation disorder, recognisable by non-progressive isolated facial nerve palsy (cranial nerve VII). It is caused by developmental abnormalities of the facial nerve nucleus and the nerve itself. So far, 4 homozygous mutations have been identified in 5 unrelated families (12 patients) with HCFP worldwide. In this study, a large Iranian consanguineous kindred with 5 members affected by HCFP underwent thorough clinical and genetic evaluation. The candidate gene HOXB1 was screened and analysed by Sanger sequencing. As in previous cases, the most remarkable findings in the affected members of the family were mask-like faces, bilateral facial palsy with variable sensorineural hearing loss, and some dysmorphic features. Direct sequencing of the candidate gene HOXB1 identified a novel homozygous frameshift mutation (c.296_302del; p.Y99Wfs*20) which co-segregated with the disease phenotype within the extended family. Our findings expand the mutational spectrum of HOXB1 involved in HCFP and consolidate the role of the gene in the development of autosomal recessive HCFP. Moreover, the truncating mutation identified in this family leads to a presentation and severity broadly similar to those observed in previous patients with nonsense and missense mutations. This study characterises and defines the phenotypic features of this rare syndrome in a larger family than has previously been reported.

PMID: 28878610 [PubMed]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jatNJl
via IFTTT

Long-Term Progression of Sensorineural Hearing Loss and Tinnitus after Combined Intensity-Modulated Radiation Therapy and Cisplatin-Based Chemotherapy for Nasopharyngeal Carcinoma: A Case Report.

Case Rep Oncol. 2017 May-Aug;10(2):743-751

Authors: Lee MS, Penumala S, Sweet S, De Luca RR, Stearnes AE, Akgun Y

Abstract
Sensorineural hearing loss (SNHL) is a common adverse effect for nasopharyngeal carcinoma (NPC) patients treated with chemoradiotherapy. We report 12-year follow-up of a patient with stage IIB NPC, treated in 2004 with intensity-modulated radiotherapy and cisplatin-based chemotherapy. Pure-tone audiograms were conducted before treatment and at two other points in the 12-year period after treatment. Analysis of the patient's audiograms revealed that high-frequency SNHL developed after treatment and reached a plateau, accompanied by tinnitus, approximately 32 months after treatment concluded. After the plateau, high-frequency SNHL continued to develop slowly over the next 10 years, possibly a long-term effect of radiation-induced microvascular change of the hearing apparatus. The continuous high-frequency hearing decline is associated with increased tinnitus pitch in the patient. Based on this case, we recommend hearing tests at regular intervals for at least 3-5 years for NPC patients treated with chemoradiotherapy. Patients need to be educated about tinnitus, and counseling can be offered when they begin to feel inconvenienced by it. These patients also need to be advised against exposure to noise, which can aggravate the already compromised hearing apparatus, leading to further hearing loss and worsening tinnitus. Limiting the peak dose and total cumulative dose of cisplatin should be considered based on the patient's risk factors to achieve a balance between treatment efficacy and long-term adverse effects.

PMID: 28878660 [PubMed]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xh23cj
via IFTTT

Assessment of the expression and role of the α1-nAChR subunit in efferent cholinergic function during the development of the mammalian cochlea.

J Neurophysiol. 2016 Aug 01;116(2):479-92

Authors: Roux I, Wu JS, McIntosh JM, Glowatzki E

Abstract
Hair cell (HC) activity in the mammalian cochlea is modulated by cholinergic efferent inputs from the brainstem. These inhibitory inputs are mediated by calcium-permeable nicotinic acetylcholine receptors (nAChRs) containing α9- and α10-subunits and by subsequent activation of calcium-dependent potassium channels. Intriguingly, mRNAs of α1- and γ-nAChRs, subunits of the "muscle-type" nAChR, have also been found in developing HCs (Cai T, Jen HI, Kang H, Klisch TJ, Zoghbi HY, Groves AK. J Neurosci 35: 5870-5883, 2015; Scheffer D, Sage C, Plazas PV, Huang M, Wedemeyer C, Zhang DS, Chen ZY, Elgoyhen AB, Corey DP, Pingault V. J Neurochem 103: 2651-2664, 2007; Sinkkonen ST, Chai R, Jan TA, Hartman BH, Laske RD, Gahlen F, Sinkkonen W, Cheng AG, Oshima K, Heller S. Sci Rep 1: 26, 2011), prompting proposals that another type of nAChR is present and may be critical during early synaptic development. Mouse genetics, histochemistry, pharmacology, and whole-cell recording approaches were combined to test the role of the α1-nAChR subunit in HC efferent synapse formation and cholinergic function. The onset of α1-mRNA expression in mouse HCs was found to coincide with the onset of the ACh response and efferent synaptic function. However, in mouse inner hair cells (IHCs) no response to the muscle-type nAChR agonists (±)-anatoxin A, (±)-epibatidine, (-)-nicotine, or 1,1-dimethyl-4-phenylpiperazinium iodide (DMPP) was detected, arguing against the presence of an independent functional α1-containing muscle-type nAChR in IHCs. In α1-deficient mice, no obvious change of IHC efferent innervation was detected at embryonic day 18, contrary to the hyperinnervation observed at the neuromuscular junction. Additionally, ACh response and efferent synaptic activity were detectable in α1-deficient IHCs, suggesting that α1 is not necessary for assembly and membrane targeting of nAChRs or for efferent synapse formation in IHCs.

PMID: 27098031 [PubMed - indexed for MEDLINE]



from #Audiology via ola Kala on Inoreader http://ift.tt/2vKKlJC
via IFTTT


Brimonidine Protects Auditory Hair Cells from in vitro-Induced Toxicity of Gentamicin

Brimonidine, an alpha-2 adrenergic receptor (α2-AR) agonist, has neuroprotective effects in the visual system and in spiral ganglion neurons. Auditory hair cells (HCs) express all 3 α2-AR subtypes, but their roles in HCs remain unknown. This study investigated the effects of brimonidine on auditory HCs exposed to gentamicin, which is toxic to HCs. Organ of Corti explants were exposed to gentamicin in the presence or absence of brimonidine, and α2-AR protein expression levels and Erk1/2 and Akt phosphorylation levels were determined. Brimonidine protected auditory HCs against gentamicin-induced toxicity, and this protection was blocked by the α2-AR antagonist yohimbine, suggesting that the protective effect of brimonidine on HCs is mediated by the α2-AR. None of the treatments altered α2-AR protein expression levels, and brimonidine did not significantly change the activation levels of the Erk1/2 and Akt proteins. These observations indicate that brimonidine, acting directly via the α2-AR, protects HCs from gentamicin-induced toxicity. Therefore, brimonidine shows potential for preventing or treating sensorineural hearing loss.
Audiol Neurotol 2017;22:125-134

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xhAiQw
via IFTTT

Assessing the Relationship Between the Electrically Evoked Compound Action Potential and Speech Recognition Abilities in Bilateral Cochlear Implant Recipients.

Objectives
The primary objective of the present study was to examine the relationship between suprathreshold electrically evoked compound action potential (ECAP) measures and speech recognition abilities in bilateral cochlear implant listeners. We tested the hypothesis that the magnitude of ear differences in ECAP measures within a subject (right-left) could predict the difference in speech recognition performance abilities between that subject's ears (right-left).
Design
To better control for across-subject variables that contribute to speech understanding, the present study used a within-subject design. Subjects were 10 bilaterally implanted adult cochlear implant recipients. We measured ECAP amplitudes and slopes of the amplitude growth function in both ears for each subject. We examined how each of these measures changed when increasing the interphase gap of the biphasic pulses. Previous animal studies have shown correlations between these ECAP measures and auditory nerve survival. Speech recognition measures included speech reception thresholds for sentences in background noise, as well as phoneme discrimination in quiet and in noise.
Results
Results showed that the between-ear difference (right-left) of one specific ECAP measure (increase in amplitude growth function slope as the interphase gap increased from 7 to 30 µs) was significantly related to the between-ear difference (right-left) in speech recognition. Frequency-specific response patterns for ECAP data and consonant transmission cues support the hypothesis that this particular ECAP measure may represent localized functional acuity.
Conclusions
The results add to a growing body of literature suggesting that when using a well-controlled research design, there is evidence that underlying neural function is related to postoperative performance with a cochlear implant. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.
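The key predictor in this abstract (the change in amplitude growth function slope as the interphase gap increases, compared between ears) reduces to simple arithmetic once the slopes are fitted. The sketch below is illustrative only: the function name is invented and all amplitude values are made up, not data from the study.

```python
import numpy as np

def agf_slope(levels, amplitudes):
    # Slope of the ECAP amplitude growth function: linear fit of
    # response amplitude against stimulation level.
    return float(np.polyfit(levels, amplitudes, 1)[0])

# Hypothetical amplitudes (µV) at four stimulation levels, per ear and
# per interphase gap (IPG); all values are invented for illustration.
levels = [180.0, 190.0, 200.0, 210.0]
right = {7: [50.0, 110.0, 170.0, 230.0], 30: [50.0, 130.0, 210.0, 290.0]}
left = {7: [40.0, 90.0, 140.0, 190.0], 30: [40.0, 100.0, 160.0, 220.0]}

# Per-ear change in AGF slope as the IPG increases from 7 to 30 µs,
# then the between-ear (right minus left) difference used as the predictor.
right_change = agf_slope(levels, right[30]) - agf_slope(levels, right[7])
left_change = agf_slope(levels, left[30]) - agf_slope(levels, left[7])
ear_difference = right_change - left_change
```

The hypothesis tested in the study is that this `ear_difference` tracks the right-minus-left difference in speech recognition scores within the same listener.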

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2gME7m6
via IFTTT
