Thursday, 6 December 2018

The Acoustic Environments in Which Older Adults Wear Their Hearing Aids: Insights From Datalogging Sound Environment Classification

Purpose
This report presents data on the acoustic environments in which older adults with age-related hearing loss wear their hearing aids.
Method
This is an observational study providing descriptive data from 2 primary datasets: (a) 128 older adults wearing hearing aids for an average of 6 weeks and (b) 65 older adults wearing hearing aids for an average of 13 months. Acoustic environments were automatically and continuously classified about every 4 s, using the hearing aids' signal processing, into 1 of 7 acoustic environment categories.
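For readers working with this kind of datalogging output, the sketch below shows how per-frame classifications could be aggregated into an acoustic environment profile. It is a minimal Python illustration assuming a hypothetical log format (one category label per 4-s frame) and illustrative category names; it is not the hearing aid manufacturer's datalogging format or API.

```python
from collections import Counter

# Hypothetical example: each datalogging record is one 4-s frame labeled with one of
# the seven environment categories from the hearing aid classifier. The category
# names used below are illustrative, not the manufacturer's actual labels.
FRAME_SECONDS = 4

def environment_profile(frames):
    """Convert a list of per-frame category labels into hours and percent of wear time."""
    counts = Counter(frames)
    total = sum(counts.values())
    profile = {}
    for category, n in counts.items():
        seconds = n * FRAME_SECONDS
        profile[category] = {
            "hours": seconds / 3600,
            "percent_of_wear_time": 100 * n / total,
        }
    return profile

# Example: a short synthetic log dominated by quiet and speech-only frames
log = ["quiet"] * 5000 + ["speech_only"] * 4000 + ["speech_in_noise"] * 2000 + ["noise"] * 1000
for category, stats in environment_profile(log).items():
    print(f"{category}: {stats['hours']:.1f} h ({stats['percent_of_wear_time']:.0f}%)")
```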
Results
For both groups, older adults wore their hearing aids about 60% of the time in quiet or speech-only conditions. The automatic classification of sound environments was shown to be reliable over relatively short (6-week) and long (13-month) durations. Moreover, the results were shown to have some validity in that the obtained acoustic environment profiles matched a self-reported measure of social activity administered prior to hearing aid usage. For a subset of 56 older adults with data from both the 6-week and 13-month wear times, the daily amount of hearing aid usage diminished but the profile of sound environments frequented by the wearers remained stable.
Conclusions
Examination of the results from the automatic classification of sound environments by the hearing aids of older adults provides reliable and valid environment classifications. The present data indicate that most such wearers choose generally favorable acoustic environments for hearing aid use.

from #Audiology via ola Kala on Inoreader https://ift.tt/2R3RuzR
via IFTTT

Normative Data for a Rapid, Automated Test of Spatial Release From Masking

Purpose
The purpose of this study is to report normative data and predict thresholds for a rapid test of spatial release from masking for speech perception. The test is easily administered and has good repeatability, with the potential to be used in clinics and laboratories. Normative functions were generated for adults varying in age and amounts of hearing loss.
Method
The test of spatial release presents a virtual auditory scene over headphones with 2 conditions: colocated (with target and maskers at 0°) and spatially separated (with target at 0° and maskers at ± 45°). Listener thresholds are determined as target-to-masker ratios, and spatial release from masking (SRM) is determined as the difference between the colocated condition and spatially separated condition. Multiple linear regression was used to fit the data from 82 adults 18–80 years of age with normal to moderate hearing loss (0–40 dB HL pure-tone average [PTA]). The regression equations were then used to generate normative functions that relate age (in years) and hearing thresholds (as PTA) to target-to-masker ratios and SRM.
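As a hedged illustration of how these quantities fit together, the sketch below computes SRM from the two conditions and evaluates a linear normative function of age and PTA. The coefficients are made-up placeholders, not the regression fits reported in this study or its supplemental material.

```python
# Illustrative sketch of the normative-function idea described above.
# SRM is the colocated threshold (target-to-masker ratio, TMR) minus the spatially
# separated threshold, and a linear model in age and PTA predicts an expected threshold.
# The coefficients below are invented placeholders, NOT the published fits.

def srm(colocated_tmr_db, separated_tmr_db):
    """Spatial release from masking: colocated minus separated target-to-masker ratio."""
    return colocated_tmr_db - separated_tmr_db

def predicted_separated_tmr(age_years, pta_db_hl, b0=-8.0, b_age=0.05, b_pta=0.10):
    """Normative prediction for the separated condition (placeholder coefficients)."""
    return b0 + b_age * age_years + b_pta * pta_db_hl

# Example: a 65-year-old listener with a 25 dB HL PTA and measured thresholds in dB
measured_colocated, measured_separated = -2.0, -9.0
print("Measured SRM:", srm(measured_colocated, measured_separated), "dB")
print("Predicted separated TMR:", predicted_separated_tmr(65, 25), "dB")
```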
Results
Normative functions were able to predict thresholds with an error of less than 3.5 dB in all conditions. In the colocated condition, the function included only age as a predictive parameter, whereas in the spatially separated condition, both age and PTA were included as parameters. For SRM, PTA was the only significant predictor. Different functions were generated for the 1st run, the 2nd run, and the average of the 2 runs. All 3 functions were largely similar in form, with the smallest error being associated with the function on the basis of the average of 2 runs.
Conclusion
With the normative functions generated from this data set, it would be possible for a researcher or clinician to interpret data from a small number of participants or even a single patient without having to first collect data from a control group, substantially reducing the time and resources needed.
Supplemental Material
https://doi.org/10.23641/asha.7080878

from #Audiology via ola Kala on Inoreader https://ift.tt/2DkgwHU
via IFTTT

Learning Effects and the Sensory Organization Test: Influence of a Unilateral Peripheral Vestibular Impairment

Purpose
Healthy young controls exhibit a learning effect after undergoing repeated administrations of the sensory organization test (SOT). The primary objective of the present experiment was to determine if an SOT learning effect is present in individuals with a unilateral vestibular impairment (UVI), and if so, whether it is different from healthy controls. The secondary objective was to determine if the learning effect is dependent on the time frame of repeated SOT assessments.
Method
Eleven individuals diagnosed with a UVI and 11 controls underwent 6 repetitions of the SOT over 2 visits (3 per visit all within 1 week). A second control group underwent 3 SOT repetitions, with each repetition separated by 1 week, to evaluate the time course of the SOT learning effect.
Results
No statistically significant differences were found between the UVI group and the control group. In addition, the magnitude of the learning effect was found to be similar regardless of the length of time that separated the repetitions.
Conclusions
If the SOT is to be used as a measure of improvement, the learning effect should be exhausted (which typically occurs following the third administration) prior to the introduction of therapy. Future research should further investigate the results from those with other vestibular pathologies.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Pli61m
via IFTTT

Wideband Absorbance and 226-Hz Tympanometry in the Prediction of Optimal Distortion Product Otoacoustic Emission Primary Tone Levels

Purpose
Distortion product otoacoustic emission (DPOAE) amplitude is sensitive to the primary tone level separation effective within the cochlea. Despite potential for middle ear sound transmission characteristics to affect this separation, no primary tone level optimization formula accounts for its influence. This study was conducted to determine if inclusion of ear- and frequency-specific immittance features improves primary tone level optimization formula performance beyond that achieved using a univariate, L2-based formula.
Method
For 30 adults with normal hearing, DPOAE, wideband absorbance, and 226-Hz tympanometry measures were completed. A mixed linear modeling technique, incorporating both primary tone and acoustic immittance features, was used to generate a multivariable formula for the middle ear–specific recommendation of primary tone level separations for f2 = 1–6 kHz. The accuracy with which L1OPT, or the L1 observed to maximize DPOAE level for each given L2, could be predicted using the multivariable formula was then compared with that of a traditional, L2-based univariate formula for each individual ear.
Results
Use of the multivariable formula L1 = 0.47L2 + 2.40A + f2param + 38 [dB SPL] resulted in significantly more accurate L1OPT predictions than did the univariate formula L1 = 0.49L2 + 41 [dB SPL]. Although average improvement was small, meaningful improvements were identified within individual ears, especially for f2 = 1 and 6 kHz.
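The sketch below restates the two formulas in code so the difference between them can be computed for a given ear. The f2param values and the scaling of the absorbance term A are not reproduced in this abstract, so they are marked as placeholders; only the printed coefficients (0.47, 2.40, 38 and 0.49, 41) come from the text.

```python
# Sketch of the two primary tone level optimization formulas reported above.
# A is the ear's wideband absorbance at the f2 frequency; whether the model expects a
# 0-1 proportion or another scale is not stated in the abstract, so 0.6 below is
# illustrative. f2param is a frequency-specific term from the mixed model whose values
# are not given here, so the dictionary is a made-up placeholder.

F2_PARAM_PLACEHOLDER = {1000: 0.0, 2000: -1.0, 4000: -2.0, 6000: -3.0}  # dB, invented

def l1_univariate(l2_db_spl):
    """Traditional L2-based rule: L1 = 0.49*L2 + 41 dB SPL."""
    return 0.49 * l2_db_spl + 41

def l1_multivariable(l2_db_spl, absorbance, f2_hz):
    """Middle ear-specific rule: L1 = 0.47*L2 + 2.40*A + f2param + 38 dB SPL."""
    return 0.47 * l2_db_spl + 2.40 * absorbance + F2_PARAM_PLACEHOLDER[f2_hz] + 38

# Example: L2 = 55 dB SPL, absorbance of 0.6 at f2 = 4 kHz
print(l1_univariate(55))                 # 67.95 dB SPL
print(l1_multivariable(55, 0.6, 4000))   # 63.29 dB SPL (with placeholder f2param)
```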
Conclusion
Incorporation of a wideband absorbance measure into a primary tone level optimization formula resulted in a minor average improvement in L1OPT prediction accuracy when compared with a traditional univariate optimization formula. Further research is needed to identify characteristics of ears that might disproportionately benefit from the additional measure.

from #Audiology via ola Kala on Inoreader https://ift.tt/2OiYnL5
via IFTTT

Effectiveness of Audiologist-Delivered Cognitive Behavioral Therapy for Tinnitus and Hyperacusis Rehabilitation: Outcomes for Patients Treated in Routine Practice

Objective
The aim was to assess the effectiveness of cognitive behavioral therapy (CBT) for tinnitus and/or hyperacusis delivered by audiologists working in the National Health Service in the United Kingdom.
Design
This was a retrospective study, based on questionnaires assessing tinnitus, hyperacusis, and insomnia before and after CBT.
Study Sample
Data were gathered for 68 consecutive patients (average age = 52.5 years) who enrolled for CBT.
Results
All measures showed significant improvements after CBT. Effect sizes for patients who completed CBT were 1.13 for Tinnitus Handicap Inventory scores; 0.76 for Hyperacusis Questionnaire scores; 0.71, 0.95, and 0.93 for tinnitus loudness, annoyance, and effect on life, respectively, measured using the Visual Analog Scale; and 0.94 for the Insomnia Severity Index score. An analysis including those who dropped out also showed significant improvements for all measures.
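The abstract does not state which effect-size convention was used; as one common possibility, the snippet below computes a pre-post Cohen's d standardized by the pre-treatment standard deviation, using invented Tinnitus Handicap Inventory scores rather than the study's data.

```python
# One common way to compute a pre-post effect size like those reported above:
# Cohen's d using the standard deviation of the pre-treatment scores.
# The scores below are invented for illustration only.
import numpy as np

pre = np.array([62, 70, 54, 48, 66, 58])   # hypothetical Tinnitus Handicap Inventory, pre-CBT
post = np.array([30, 42, 28, 26, 40, 34])  # hypothetical scores after CBT

d = (pre.mean() - post.mean()) / pre.std(ddof=1)
print(round(d, 2))
```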
Conclusion
Audiologist-delivered CBT led to significant improvements in self-report measures of tinnitus and hyperacusis handicap and insomnia. The methods described here may be used when designing future randomized controlled trials of efficacy.

from #Audiology via ola Kala on Inoreader https://ift.tt/2MHx9Nd
via IFTTT

A Comparison of Personal Sound Amplification Products and Hearing Aids in Ecologically Relevant Test Environments

Purpose
The aim of this study was to compare the benefit of self-adjusted personal sound amplification products (PSAPs) to audiologist-fitted hearing aids based on speech recognition, listening effort, and sound quality in ecologically relevant test conditions to estimate real-world effectiveness.
Method
Twenty-five older adults with bilateral mild-to-moderate hearing loss completed the single-blinded, crossover study. Participants underwent aided testing using 3 PSAPs and a traditional hearing aid, as well as unaided testing. PSAPs were adjusted based on participant preference, whereas the hearing aid was configured using best-practice verification protocols. Audibility provided by the devices was quantified using the Speech Intelligibility Index (American National Standards Institute, 2012). Outcome measures assessing speech recognition, listening effort, and sound quality were administered in ecologically relevant laboratory conditions designed to represent real-world speech listening situations.
Results
All devices significantly improved Speech Intelligibility Index compared to unaided listening, with the hearing aid providing more audibility than all PSAPs. Results further revealed that, in general, the hearing aid improved speech recognition performance and reduced listening effort significantly more than all PSAPs. Few differences in sound quality were observed between devices. All PSAPs improved speech recognition and listening effort compared to unaided testing.
Conclusions
Hearing aids fitted using best-practice verification protocols were capable of providing more aided audibility, better speech recognition performance, and lower listening effort compared to the PSAPs tested in the current study. Differences in sound quality between the devices were minimal. However, because all PSAPs tested in the study significantly improved participants' speech recognition performance and reduced listening effort compared to unaided listening, PSAPs could serve as a budget-friendly option for those who cannot afford traditional amplification.

from #Audiology via ola Kala on Inoreader https://ift.tt/2xtG8vX
via IFTTT

A Study of Social Media Utilization by Individuals With Tinnitus

Purpose
As more people experience tinnitus, social awareness of tinnitus has consequently increased, due in part to the Internet. Social media platforms are being used increasingly by patients to seek health-related information for various conditions including tinnitus. These online platforms may be used to seek guidance from and share experiences with individuals suffering from a similar disorder. Some social media platforms can also be used to communicate with health care providers. The aim of this study was to investigate the prevalence of tinnitus-related information on social media platforms.
Method
The present investigation analyzed the portrayal of tinnitus-related information across 3 social media platforms: Facebook (pages and groups), Twitter, and YouTube. We performed a comprehensive analysis of the platforms using the key words “tinnitus” and “ringing in the ears.” The results on each platform were manually examined by 2 reviewers based on social media activity metrics, such as “likes,” “followers,” and “comments.”
Results
The different social media platforms yielded diverse results, allowing individuals to learn about tinnitus, seek support, advocate for tinnitus awareness, and connect with medical professionals. The greatest activity was seen on Facebook pages, followed by YouTube videos. Various degrees of misinformation were found across all social media platforms.
Conclusions
The present investigation reveals copious amounts of tinnitus-related information on different social media platforms, which the community with tinnitus may use to learn about and cope with the condition. Audiologists must be aware that tinnitus sufferers often turn to social media for additional help and should understand the current climate of how tinnitus is portrayed. Clinicians should be equipped to steer individuals with tinnitus toward valid information.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NmGJu4
via IFTTT

Factors Associated With Self-Reported Hearing Aid Management Skills and Knowledge

Purpose
Hearing aid management describes the skills and knowledge required for the handling, use, care, and maintenance of the hearing aid. The importance of hearing aid management skills and knowledge is evidenced by their association with hearing aid outcomes. However, the nature of this association and the influence of participant factors on this association are unknown. Accordingly, the aims of the current study were to (a) investigate participant factors that influence hearing aid management skills and knowledge and (b) investigate the impact of hearing aid management skills and knowledge on hearing aid outcomes.
Method
Factors associated with hearing aid management skills and knowledge were investigated through an e-mail– and paper-based self-report survey, including the Hearing Aid Skills and Knowledge Inventory (Bennett, Meyer, Eikelboom, & Atlas, 2018b) and the International Outcomes Inventory for Hearing Aids (Cox & Alexander, 2002). The study sample included 518 adult hearing aid owners, ranging in age from 18 to 97 years (M = 71 years, SD = 14 years), 61% male and 39% female, recruited from seven hearing clinics across Australia.
Results
Participant factors found to be associated with hearing aid skills and knowledge included participants' age, gender, style of hearing aid, age of current hearing aid, and total years of hearing aid ownership. Higher levels of hearing aid management skills and knowledge were found to be associated with better hearing aid outcomes, specifically higher self-reported satisfaction with hearing aids, perceived benefit from hearing aids, and overall outcome of the hearing aid fitting as evaluated by the International Outcomes Inventory for Hearing Aids.
Conclusions
Hearing aid management difficulties were greatest for older people, women, and owners of behind-the-ear style of hearing aids, suggesting that clinicians need to be cognizant of the additional needs for these three groups. The positive association between hearing aid outcomes and hearing aid skills and knowledge emphasizes the importance of education and training on hearing aid management for successful aural rehabilitation.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NkGniO
via IFTTT

Accuracy of Smartphone Self-Hearing Test Applications Across Frequencies and Earphone Styles in Adults

Purpose
The purpose of this study is to evaluate smartphone-based self-hearing test applications (apps) for accuracy in threshold assessment and validity in screening for hearing loss across frequencies and earphone transducer styles.
Method
Twenty-two adult participants (10 with normal hearing; 12 with sensorineural hearing loss; n = 44 ears) underwent conventional audiometry and performed 6 self-administered hearing tests using two iPhone-based apps (App 1 = uHear [Version 2.0.2, Unitron]; App 2 = uHearingTest [Version 1.0.3, WooFu Tech, LLC]), each with 3 different transducers (earbud earphones, supra-aural headphones, circumaural headphones). Hearing sensitivity results using the smartphone apps across frequencies and transducers were compared with conventional audiometry.
Results
Differences in accuracy were revealed between the hearing test apps across frequencies and earphone styles. The uHear app using the iPhone standard EarPod earbud earphones was accurate relative to conventional thresholds (p > .002 with Bonferroni correction) at 1000, 2000, 4000, and 6000 Hz and found valid (81%–100% sensitivity, specificity, positive and negative predictive values) for screening mild or greater hearing loss (> 25 dB HL) at 500, 1000, 2000, 4000, and 6000 Hz. The uHearingTest app was accurate in threshold assessment and determined valid for screening mild or greater hearing loss (> 25 dB HL) using supra-aural headphones at 2000, 4000, and 8000 Hz.
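For reference, the screening-validity figures quoted above (sensitivity, specificity, positive and negative predictive values) are derived from a 2 × 2 table of app screening outcomes against the audiometric reference. The snippet below shows the arithmetic with invented counts; it does not reproduce the study's data.

```python
# Screening validity metrics from a 2x2 table of app result vs. audiometric reference.
# Counts are invented for illustration.

def screening_metrics(tp, fn, fp, tn):
    return {
        "sensitivity": tp / (tp + fn),  # hit rate for ears with loss > 25 dB HL
        "specificity": tn / (tn + fp),  # correct-pass rate for ears without such loss
        "ppv": tp / (tp + fp),          # probability of loss given the app refers
        "npv": tn / (tn + fn),          # probability of normal hearing given the app passes
    }

print(screening_metrics(tp=18, fn=2, fp=3, tn=21))
```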
Conclusions
Self-hearing test apps can be accurate in hearing threshold assessment and screening for mild or greater hearing loss (> 25 dB HL) when using appropriate transducers. To ensure accuracy, manufacturers should specify earphone model instructions to users of smartphone-based self-hearing test apps.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Mto2zH
via IFTTT

Identifiers of Language Impairment for Spanish–English Dual Language Learners

Purpose
The purpose of this study was to determine if a standardized assessment developed for Spanish–English dual language learners (SEDLLs) differentiates SEDLLs with language impairment (LI) from children with typical language better than the translated/adapted Spanish and/or English version of a standardized assessment and to determine if adding informal measure/s to the standardized assessment increases the classification accuracy.
Method
Standardized and informal language assessment measures were administered to 30 Mexican American 4- to 5-year-old SEDLLs to determine the predictive value of each measure and the group of measures that best identified children with LI and typical language. Discriminant analyses were performed on the data set.
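A hedged sketch of this kind of analysis is shown below using scikit-learn's LinearDiscriminantAnalysis, with hypothetical predictors mirroring the measures used in this study (BESA Morphosyntax, BESA Semantics, and mean length of utterance in words from the better language sample). The data and the resulting classification rule are invented for illustration and are not the study's fitted model.

```python
# Illustrative discriminant analysis in the spirit of the method described above.
# Predictor values and group labels are invented; this is not the study's dataset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# columns: BESA Morphosyntax, BESA Semantics, MLU in words (better language sample)
X = np.array([
    [85, 90, 5.2], [92, 88, 6.0], [88, 84, 5.5], [80, 86, 4.9],   # typical language
    [70, 75, 3.1], [65, 60, 2.8], [72, 68, 3.4], [60, 66, 2.5],   # language impairment
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = typical, 1 = LI

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[78, 80, 4.0]]))        # predicted group for a new child
print(lda.predict_proba([[78, 80, 4.0]]))  # posterior probabilities (cf. classification confidence)
```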
Results
The Morphosyntax and Semantics subtests of the Bilingual English–Spanish Assessment (Peña, Gutierrez-Clellen, Iglesias, Goldstein, & Bedore, 2014) resulted in the largest effect size of the individual assessments with a sensitivity of 93.3% and a specificity of 86.7%. Combining these subtests with mean length of utterance in words from the child's better language sample (English or Spanish) was most accurate in identifying LI and can be used with above 90% confidence.
Conclusion
The Bilingual English–Spanish Assessment Morphosyntax and Semantics subtests were shown to comprise an effective measure for identifying LI; however, including a language sample is suggested to identify LI with greater accuracy.

from #Audiology via ola Kala on Inoreader https://ift.tt/2SzJlDb
via IFTTT

Sketch and Speak: An Expository Intervention Using Note-Taking and Oral Practice for Children With Language-Related Learning Disabilities

Purpose
This preliminary study investigated an intervention procedure employing 2 types of note-taking and oral practice to improve expository reporting skills.
Procedure
Forty-four 4th to 6th graders with language-related learning disabilities from 9 schools were assigned to treatment or control conditions that were balanced for grade, oral language, and other features. The treatment condition received 6 individual or pair sessions of 30 min each from the schools' speech-language pathologists (SLPs). Treatment involved reducing statements from grade-level science articles into concise ideas, recording the ideas as pictographic and conventional notes, and expanding from the notes into full oral sentences that were then combined into oral reports. Participants were pretested and posttested on taking notes from grade-level history articles and using the notes to give oral reports. Posttesting also included written reports 1 to 3 days following the oral reports.
Results
The treatment group showed significantly greater improvement than the control group on multiple quality features of the notes and oral reports. Quantity, holistic oral quality, and delayed written reports were not significantly better. The SLPs reported high levels of student engagement and learning of skills and content within treatment. They attributed the perceived benefits to the elements of simplicity, visuals, oral practice, repeated opportunities, and visible progress.
Conclusion
This study indicates potential for Sketch and Speak to improve student performance in expository reporting and gives direction for strengthening and further investigating this novel SLP treatment.
Supplemental Material
https://doi.org/10.23641/asha.7268651

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ej4aPY
via IFTTT

The Development of American Sign Language–Based Analogical Reasoning in Signing Deaf Children

Purpose
This article examines whether syntactic and vocabulary abilities in American Sign Language (ASL) facilitate 6 categories of language-based analogical reasoning.
Method
Data for this study were collected from 267 deaf participants, aged 7;6 (years;months) to 18;5. The data were collected from an ongoing study initially funded by the U.S. Institute of Education Sciences in 2010. The participants were given assessments of ASL vocabulary and syntax knowledge and a task of language-based analogies presented in ASL. The data were analyzed using mixed-effects linear modeling to first see how language-based analogical reasoning developed in deaf children and then to see how ASL knowledge influenced this developmental trajectory.
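As an illustration of the modeling approach described (not the authors' actual analysis), the sketch below fits a mixed-effects linear model with a random intercept per participant using statsmodels, on a small synthetic dataset with hypothetical variable names.

```python
# Sketch of a mixed-effects linear model in the spirit of the analysis above.
# The dataset and variable names (analogy_score, asl_vocab, asl_syntax) are synthetic
# placeholders, not the study's data or its exact model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, obs_per_participant = 30, 2
n = n_participants * obs_per_participant

df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_participants), obs_per_participant),
    "age_years": np.repeat(rng.uniform(7.5, 18.5, n_participants), obs_per_participant),
    "asl_vocab": np.repeat(rng.normal(50, 10, n_participants), obs_per_participant),
    "asl_syntax": np.repeat(rng.normal(50, 10, n_participants), obs_per_participant),
})
df["analogy_score"] = (0.3 * df["age_years"] + 0.2 * df["asl_vocab"]
                       + 0.2 * df["asl_syntax"] + rng.normal(0, 3, n))

# Random intercept per participant; fixed effects for age and ASL knowledge
model = smf.mixedlm("analogy_score ~ age_years + asl_vocab + asl_syntax",
                    data=df, groups=df["participant_id"])
print(model.fit().summary())
```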
Results
Signing deaf children were shown to demonstrate language-based reasoning abilities in ASL consistent with both chronological age and home language environment. Notably, when ASL vocabulary and syntax abilities were statistically taken into account, these were more important in fostering the development of language-based analogical reasoning abilities than were chronological age and home language. We further showed that ASL vocabulary ability and ASL syntactic knowledge made different contributions to different analogical reasoning subconstructs.
Conclusions
ASL is a viable language that supports the development of language-based analogical reasoning abilities in deaf children.

from #Audiology via ola Kala on Inoreader https://ift.tt/2REOGZN
via IFTTT

Idiopathic Toe walking—a follow-up survey of gait analysis assessment

Publication date: Available online 6 December 2018

Source: Gait & Posture

Author(s): Rory O’Sullivan, Khalid Munir, Louise Keating

Abstract
Background

Toe-walking is a normal variant in children up to 3 years of age, but beyond this a diagnosis of idiopathic toe-walking (ITW) must be considered. ITW is an umbrella term that covers all cases of toe-walking without any diagnosed underlying medical condition, and before assigning this diagnosis, potential differential diagnoses such as cerebral palsy, peripheral neuropathy, spinal dysraphism and myopathy must be ruled out. Gait laboratory assessment (GLA) is thought to be useful in the evaluation of ITW, and kinematic, kinetic and electromyography features associated with ITW have been described. However, the longer-term robustness of a diagnosis based on GLA has not been investigated. The primary aim of this study was to examine if a diagnosis of ITW based on GLA features persisted.

Methods

All patients referred to a national gait laboratory service over a ten-year period with queried ITW were sent a postal survey to establish whether a diagnosis of ITW offered following GLA persisted over time. The gait and clinical parameters differentiating those reported as typical ITW and not-typical-ITW following GLA were examined in the survey respondents.

Results

Of 102 referrals to the laboratory with queried ITW, a response rate of 40.2% (n = 41) was achieved. Of the respondents, 78% (n = 32) were found to be typical of ITW following GLA and this diagnosis persisted in the entire group at an average of 7 years post GLA. The other nine subjects were reported as not typical of ITW following GLA and 44.4% (n = 4) received a subsequent differential diagnosis. The clinical examination and gait analysis features differentiating these groups were consistent with previous literature.

Conclusion

GLA appears to be a useful objective tool in the assessment of ITW and a diagnosis based on described features persists in the long-term.



from #Audiology via ola Kala on Inoreader https://ift.tt/2Pov7TJ
via IFTTT

A prediction method of speed-dependent walking patterns for healthy individuals

Publication date: Available online 5 December 2018

Source: Gait & Posture

Author(s): Claudiane A. Fukuchi, Marcos Duarte

Abstract
Background

Gait speed is one of the main biomechanical determinants of human movement patterns. However, in clinical gait analysis, the effect of gait speed is generally not considered, and people with disabilities are usually compared with able-bodied individuals even though disabled people tend to walk slower.

Research questions

This study proposes a simple way to predict the gait pattern of healthy individuals at a specific speed.

Methods

The method consists of creating a reference database for a range of gait speeds, and the gait-pattern prediction is implemented as follows: 1) the gait cycle is discretized from 0 to 100% for each variable, 2) a first- or second-order polynomial is fitted to the values of the reference dataset versus the corresponding gait speeds at each instant of the gait cycle to obtain the regression parameters, and 3) these regression parameters are then used to predict the new values of the gait pattern at any specific speed. Twenty-four healthy adults walked on a treadmill at eight different gait speeds, and the gait patterns were obtained with a 3D motion capture system and an instrumented treadmill.
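A minimal sketch of this prediction scheme, under the assumption of a reference array with one row per trial and 101 normalized gait-cycle points, is given below. The synthetic "joint angle" curves are invented for illustration; only the per-time-point polynomial-regression idea comes from the abstract.

```python
# Per-time-point polynomial regression against gait speed, as described above.
# Data shapes and the synthetic reference curves are assumptions for illustration.
import numpy as np

def fit_speed_model(reference_curves, speeds, degree=2):
    """reference_curves: (n_trials, 101) values of one gait variable; speeds: (n_trials,).
    Returns per-time-point polynomial coefficients with shape (degree + 1, 101)."""
    return np.polyfit(speeds, reference_curves, deg=degree)

def predict_curve(coeffs, new_speed):
    """Evaluate the per-time-point polynomials at a new gait speed -> (101,) curve."""
    return np.array([np.polyval(coeffs[:, t], new_speed) for t in range(coeffs.shape[1])])

# Synthetic reference data: toy knee-angle-like curves whose amplitude scales with speed
rng = np.random.default_rng(1)
speeds = rng.uniform(0.8, 1.8, size=40)                # gait speeds in m/s
cycle = np.linspace(0, 1, 101)                         # normalized gait cycle, 0-100%
base = 60 * np.sin(np.pi * cycle) ** 2                 # toy "joint angle" shape in degrees
curves = np.outer(0.8 + 0.3 * speeds, base) + rng.normal(0, 1, (40, 101))

coeffs = fit_speed_model(curves, speeds, degree=2)
predicted = predict_curve(coeffs, new_speed=1.2)       # predicted curve at 1.2 m/s
print(predicted.shape, round(float(predicted.max()), 1))
```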

Results

Overall, the predicted data presented good agreement with the experimental data for the joint angles and joint moments.

Significance

These results demonstrated that the proposed prediction method can be used to generate more unbiased reference data for clinical gait analysis and might be suitably applied to other speed-dependent human movement patterns.



from #Audiology via ola Kala on Inoreader https://ift.tt/2zLMOrb
via IFTTT

Calibration and Validation of Accelerometer-based Activity Monitors: A Systematic Review of Machine-Learning Approaches

Publication date: Available online 5 December 2018

Source: Gait & Posture

Author(s): Vahid Farrahi, Maisa Niemelä, Maarit Kangas, Raija Korpelainen, Timo Jämsä



from #Audiology via ola Kala on Inoreader https://ift.tt/2Pnzput
via IFTTT

Effect of blindness on mismatch responses to Mandarin lexical tones, consonants, and vowels

Publication date: January 2019

Source: Hearing Research, Volume 371

Author(s): Jie Feng, Chang Liu, Mingshuang Li, Hongjun Chen, Peng Sun, Ruibo Xie, Ying Zhao, Xinchun Wu

Abstract

According to the hypothesis of auditory compensation, blind listeners are more sensitive to auditory input than sighted listeners. In the current study, we employed the passive oddball paradigm to investigate the effect of blindness on listeners’ mismatch responses to Mandarin lexical tones, consonants, and vowels. Twelve blind and twelve sighted age- and verbal IQ-matched adults with normal hearing participated in this study. Our results indicated that blind listeners possibly had more efficient pre-attentive processing (shorter MMN peak latency) of lexical tones in the tone-dominant hemisphere (i.e., the right hemisphere); and that they exhibited greater sensitivity (larger MMN amplitude) when processing phonemes (consonants and/or vowels) at the pre-attentive stage in both hemispheres compared with sighted individuals. However, we observed longer MMN and P3a peak latencies during phoneme processing in the blind versus control participants, indicating that blind listeners may be slower in terms of pre-attentive processing and involuntary attention switching when processing phonemes. This could be due to a lack of visual experience in the production and perception of phonemes. In sum, the current study revealed a two-sided influence of blindness on Mandarin speech perception.



from #Audiology via ola Kala on Inoreader https://ift.tt/2FJlXBw
via IFTTT

Differential responses to spectrally degraded speech within human auditory cortex: An intracranial electrophysiology study

Publication date: January 2019

Source: Hearing Research, Volume 371

Author(s): Kirill V. Nourski, Mitchell Steinschneider, Ariane E. Rhone, Christopher K. Kovach, Hiroto Kawasaki, Matthew A. Howard

Abstract

Understanding cortical processing of spectrally degraded speech in normal-hearing subjects may provide insights into how sound information is processed by cochlear implant (CI) users. This study investigated electrocorticographic (ECoG) responses to noise-vocoded speech and related these responses to behavioral performance in a phonemic identification task. Subjects were neurosurgical patients undergoing chronic invasive monitoring for medically refractory epilepsy. Stimuli were utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1–4 bands). ECoG responses were obtained from Heschl's gyrus (HG) and superior temporal gyrus (STG), and were examined within the high gamma frequency range (70–150 Hz). All subjects performed at chance accuracy with speech degraded to 1 and 2 spectral bands, and at or near ceiling for clear speech. Inter-subject variability was observed in the 3- and 4-band conditions. High gamma responses in posteromedial HG (auditory core cortex) were similar for all vocoded conditions and clear speech. A progressive preference for clear speech emerged in anterolateral segments of HG, regardless of behavioral performance. On the lateral STG, responses to all vocoded stimuli were larger in subjects with better task performance. In contrast, both behavioral and neural responses to clear speech were comparable across subjects regardless of their ability to identify degraded stimuli. Findings highlight differences in representation of spectrally degraded speech across cortical areas and their relationship to perception. The results are in agreement with prior non-invasive results. The data provide insight into the neural mechanisms associated with variability in perception of degraded speech and potentially into sources of such variability in CI users.



from #Audiology via ola Kala on Inoreader https://ift.tt/2DRqTTb
via IFTTT

Investigating peripheral sources of speech-in-noise variability in listeners with normal audiograms

Publication date: January 2019

Source: Hearing Research, Volume 371

Author(s): S.B. Smith, J. Krizman, C. Liu, T. White-Schwoch, T. Nicol, N. Kraus

Abstract

A current initiative in auditory neuroscience research is to better understand why some listeners struggle to perceive speech-in-noise (SIN) despite having normal hearing sensitivity. Various hypotheses regarding the physiologic bases of this disorder have been proposed. Notably, recent work has suggested that the site of lesion underlying SIN deficits in normal-hearing listeners may lie either in “sub-clinical” outer hair cell damage or in synaptopathic degeneration at the inner hair cell–auditory nerve fiber synapse. In this study, we present a retrospective investigation of these peripheral sources and their relationship with SIN performance variability in one of the largest datasets of young normal-hearing listeners presented to date. A total of 194 participants completed detailed case history questionnaires assessing noise exposure, SIN complaints, tinnitus, and hyperacusis. Standard and extended high-frequency audiograms, distortion product otoacoustic emissions, click-evoked auditory brainstem responses, and SIN performance measures were also collected. We found that: 1) the prevalence of SIN deficits in normal-hearing listeners was 42% when based on subjective report and 8% when based on SIN performance, 2) hearing complaints and hyperacusis were more common in listeners with self-reported noise exposure histories than in controls, 3) neither extended high-frequency thresholds nor compound action potential amplitudes differed between noise-exposed and control groups, and 4) extended high-frequency hearing thresholds and compound action potential amplitudes were not predictive of SIN performance. These results suggest an association between noise exposure and hearing complaints in young, normal-hearing listeners; however, SIN performance variability is not explained by peripheral auditory function to the extent that these measures capture subtle physiologic differences between participants.
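
As an illustration of the kind of prediction analysis described above, the sketch below (synthetic data only) tests whether extended high-frequency thresholds and compound action potential amplitudes correlate with speech-in-noise scores; all variable names and values are placeholders, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 194                                      # sample size matching the study
ehf_thresholds = rng.normal(10, 8, n)        # extended high-frequency PTA (dB HL), placeholder
cap_amplitudes = rng.normal(0.5, 0.1, n)     # compound action potential amplitude (uV), placeholder
sin_scores = rng.normal(-5, 2, n)            # speech-in-noise score (dB SNR), placeholder

# Simple correlational check: does either peripheral measure predict SIN performance?
for name, predictor in [("EHF threshold", ehf_thresholds),
                        ("CAP amplitude", cap_amplitudes)]:
    r, p = stats.pearsonr(predictor, sin_scores)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")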



from #Audiology via ola Kala on Inoreader https://ift.tt/2DRCf9O
via IFTTT

Human middle-ear muscles rarely contract in anticipation of acoustic impulses: Implications for hearing risk assessments

Publication date: Available online 4 December 2018

Source: Hearing Research

Author(s): Heath G. Jones, Nathaniel T. Greene, William A. Ahroon

Abstract

The current study addressed the existence of an anticipatory middle-ear muscle contraction (MEMC) as a protective mechanism assumed in recent damage-risk criteria for impulse noise exposure. Specifically, the experiments reported here tested instances when an exposed individual was aware of and could anticipate the arrival of an acoustic impulse. To detect MEMCs in human subjects, a laser-Doppler vibrometer (LDV) was used to measure tympanic membrane (TM) motion in response to a probe tone. Here we directly measured the time course and relative magnitude changes of TM velocity in response to an acoustic reflex-eliciting (i.e., MEMC-eliciting) impulse in 59 subjects with clinically assessable MEMCs. After verifying the presence of the MEMC, we used a classical conditioning paradigm pairing reflex-eliciting acoustic impulses (unconditioned stimulus, UCS) with various preceding stimuli (conditioned stimulus, CS). Changes in the time course of the MEMC following conditioning were considered evidence of MEMC conditioning, and any indication of an MEMC prior to the onset of the acoustic elicitor was considered an anticipatory response. Nine subjects did not produce an MEMC measurable via LDV. Of the subjects with an observable MEMC (n=50), 48 (96%) did not show evidence of an anticipatory response after conditioning, whereas only 2 (4%) did. These findings reveal that MEMCs are not readily conditioned in most individuals, suggesting that anticipatory MEMCs are not prevalent within the general population. The prevalence of anticipatory MEMCs does not appear to be sufficient to justify their inclusion as a protective mechanism in auditory injury risk assessments.
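
The decision rule for an anticipatory response can be made concrete with a small sketch; the baseline window and the 3-SD criterion below are assumptions for illustration, not the authors' criteria.

import numpy as np

def anticipatory_memc(tm_velocity, fs, elicitor_onset_s, n_sd=3.0):
    """Flag an anticipatory MEMC: True if the mean TM velocity in the window
    just before the elicitor deviates from baseline by more than n_sd baseline SDs."""
    onset_idx = int(elicitor_onset_s * fs)
    baseline = tm_velocity[: onset_idx // 2]               # early portion taken as baseline
    pre_window = tm_velocity[onset_idx // 2 : onset_idx]   # window preceding the elicitor
    return bool(abs(pre_window.mean() - baseline.mean()) > n_sd * baseline.std())

# Example call on simulated, noise-like data (expected: False, i.e. no anticipation)
fs = 10000
velocity = np.random.randn(fs)        # 1 s of placeholder TM-velocity samples
print(anticipatory_memc(velocity, fs, elicitor_onset_s=0.5))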



from #Audiology via ola Kala on Inoreader https://ift.tt/2AVnT46
via IFTTT

Microvascular networks in the area of the auditory peripheral nervous system

Publication date: Available online 4 December 2018

Source: Hearing Research

Author(s): Han Jiang, Xiaohan Wang, Jinhui Zhang, Allan Kachelmeier, Ivan A. Lopez, Xiaorui Shi

Abstract

Using transgenic fluorescent reporter mice in combination with an established tissue clearing method, we detail heretofore optically opaque regions of the spiral lamina and spiral limbus where the auditory peripheral nervous system is located and provide insight into changes in cochlear vascular density with ageing. We found a relatively dense and branched vascular network in young adults, but a less dense and thinned network in aged adults. Significant reduction in vascular density starts early, at the age of 180 days, in the region of the spiral limbus (SL) and continues into old age at 540 days. Loss of vascular volume in the region of the spiral ganglion neurons (SGN) is delayed until the age of 540 days. In addition, we observed that two vascular accessory cells are closely associated with the microvascular system: perivascular resident macrophages and pericytes. Morphologically, perivascular resident macrophages undergo drastic changes from postnatal P7 to young adult (P30). In postnatal animals, most perivascular resident macrophages exhibit a spherical or nodular shape. In young adult mice, the majority of perivascular resident macrophages are elongated and display an orientation parallel to the vessels. In our imaging, some of the perivascular resident macrophages are caught in the act of transmigrating from the blood circulation. Pericytes also display morphological heterogeneity. In the P7 mice, pericytes are prominent on the capillary walls, relatively large and punctate, and less uniform. In contrast, pericytes in the P30 mice are relatively flat and uniform, and less densely distributed on the vascular network. With triple fluorescence labeling, we did not find an obvious physical connection between the two systems, unlike the neurovascular coupling found in the brain. However, using a fluorescent (FITC-conjugated dextran) tracer and the enzymatic tracer horseradish peroxidase (HRP), we observed robust neurovascular exchange, likely through transcytotic transport, evidenced by multiple vesicles present in the endothelial cells. Taken together, our data demonstrate the effectiveness of tissue-clearing methods as an aid in imaging the vascular architecture of the SL and SGNs in whole-mounted mouse cochlear preparations. Because structure is indicative of function, the differences in vascular structure between postnatal and young adult mice may correspond to variation in hearing refinement after birth and indicate the status of functional activity. The decrease in capillary network density in the older animals may reflect the decreased energy demand from peripheral neural activity. The finding of active transcytotic transport from blood to neurons opens a potential therapeutic avenue for delivery of various growth factors and gene vectors into the inner ear to target SGNs.
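
One simple way the vascular density comparisons above could be quantified is as a vascular volume fraction computed from a binary 3-D segmentation mask; the sketch below uses placeholder data and an assumed voxel size, and is not the authors' quantification method.

import numpy as np

# Placeholder 3-D segmentation: True where voxels belong to vessels
vessel_mask = np.zeros((128, 128, 64), dtype=bool)
vessel_mask[40:60, 40:60, 20:30] = True            # fake vessel voxels for illustration

voxel_volume_um3 = 1.0 * 1.0 * 2.0                 # assumed voxel size (x * y * z, in um)
vascular_volume_um3 = vessel_mask.sum() * voxel_volume_um3
volume_fraction = vessel_mask.mean()               # fraction of the imaged volume that is vessel

print(f"vascular volume: {vascular_volume_um3:.0f} um^3, volume fraction: {volume_fraction:.3%}")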



from #Audiology via ola Kala on Inoreader https://ift.tt/2SugJLI
via IFTTT

Noise-induced trauma produces a temporal pattern of change in blood levels of the outer hair cell biomarker prestin

Publication date: Available online 30 November 2018

Source: Hearing Research

Author(s): Kourosh Parham, Maheep Sohal, Mathieu Petremann, Charlotte Romanet, Audrey Broussy, Christophe Tran Van Ba, Jonas Dyhrfjeld-Johnsen

Abstract

Biomarkers in easy-to-access body fluid compartments, such as blood, are commonly used to assess the health of various organ systems in clinical medicine. At present, no such biomarkers are available to inform on the health of the inner ear. Previously, we proposed the outer-hair-cell-specific protein prestin as a possible biomarker and provided proof of concept in noise- and cisplatin-induced hearing loss. Our ototoxicity data suggest that circulatory prestin changes after inner ear injury are not static and that there is a temporal pattern of change that needs to be further characterized before practical information can be extracted. To achieve this goal, we set out to 1) describe the time course of change in prestin after intense noise exposure, and 2) determine whether the temporal patterns and prestin levels are sensitive to severity of injury. After assessing auditory brainstem thresholds and distortion product otoacoustic emission levels, rats were exposed to intense octave-band noise for 2 h at either 110 or 120 dB SPL. Auditory function was re-assessed 1 and 14 days later. Blood samples were collected at baseline, 4, 24, 48, and 72 h, and 7 and 14 days post-exposure, and prestin concentrations were measured using enzyme-linked immunosorbent assay (ELISA). Functional measures showed temporary hearing loss 1 day after exposure in the 110 dB SPL group, but permanent loss through Day 14 in the 120 dB SPL group. Prestin levels temporarily increased by 5% at 4 h after the 120 dB SPL exposure, but not in the 110 dB SPL group. There was a gradual decline in prestin levels in both groups thereafter, with prestin being below baseline on Day 14 by 5% in the 110 dB SPL group (not significant) and by more than 10% in the 120 dB SPL group (p = 0.043). These results suggest that there is a temporal pattern of change in serum prestin level after noise-induced hearing loss that is related to severity of hearing loss. Circulatory levels of prestin may be able to act as a surrogate biomarker for hearing loss involving outer hair cell (OHC) loss.
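
The percent-change calculation implied above is straightforward; the sketch below works through it with illustrative values chosen only to mirror the reported pattern (about +5% at 4 h and more than 10% below baseline by Day 14 in the 120 dB SPL group), not the study's raw data.

import numpy as np

timepoints_h = np.array([0, 4, 24, 48, 72, 168, 336])              # baseline through Day 14
prestin_ng_ml = np.array([10.0, 10.5, 10.2, 9.9, 9.6, 9.3, 8.9])   # hypothetical serum levels

# Percent change of each sample relative to the pre-exposure baseline
pct_change = 100.0 * (prestin_ng_ml - prestin_ng_ml[0]) / prestin_ng_ml[0]
for t, pct in zip(timepoints_h, pct_change):
    print(f"{t:4d} h: {pct:+.1f}% vs baseline")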



from #Audiology via ola Kala on Inoreader https://ift.tt/2EbbZr4
via IFTTT

Multiphoton imaging for morphometry of the sandwich-beam structure of the human stapedial annular ligament

Publication date: Available online 29 November 2018

Source: Hearing Research

Author(s): Schär M, Dobrev I, Chatzimichalis M, Röösli C, Sim JH

Abstract
Background

The annular ligament of the human stapes constitutes a compliant connection between the stapes footplate and the peripheral cochlear wall at the oval window. The cross section of the human annular ligament is characterized by a three-layered structure resembling a sandwich-shaped composite. Because accurate and precise descriptions of middle-ear behavior are constrained by a lack of information on the complex geometry of the annular ligament, this study aims to obtain comprehensive geometrical data on the annular ligament via multiphoton imaging.

Methods

The region of interest containing the stapes and annular ligament was harvested from a fresh-frozen human temporal bone of a 46-year-old female. Multiphoton imaging of the unstained sample was performed by detecting the second-harmonic generation of collagen and the autofluorescence of elastin, which are constituents of the annular ligament. The multiphoton scanning was conducted on the middle-ear side and the cochlear side of the annular ligament to obtain accurate images of the face layers on both sides. The face layers of the annular ligament were manually segmented on both multiphoton scans and then registered to high-resolution μCT images.

Results

Multiphoton scans of the annular ligament revealed 1) a relatively large thickness of the core layer compared with the face layers, 2) asymmetric geometry of the face layers between the middle-ear side and the cochlear side, with variation in their thickness and width along the footplate boundary, 3) divergent relative alignment of the two face layers, and 4) different fiber composition of the face layers along the boundary, with a collagen reinforcement near the anterior pole on the middle-ear side.

Conclusion and outlook

Multiphoton microscopy is a feasible approach for obtaining the detailed three-dimensional features of the human stapedial annular ligament along its full boundary. The detailed description of the sandwich-shaped structure of the annular ligament is expected to contribute to modeling of the human middle ear for precise simulation of middle-ear behavior. Furthermore, the methodology established in this study may be applicable to imaging of other middle-ear structures.



from #Audiology via ola Kala on Inoreader https://ift.tt/2zxmp0n
via IFTTT

Development of intra-operative assessment system for ossicular mobility and middle ear transfer function

Publication date: Available online 22 November 2018

Source: Hearing Research

Author(s): Takuji Koike, Yuuka Irie, Ryo Ebine, Takaaki Fujishiro, Sho Kanzaki, Chee Sze Keat, Takenobu Higo, Kenji Ohoyama, Masaaki Hayashi, Hajime Ikegami

Abstract

Objective measurement of ossicular mobility is not commonly performed during surgery; in most cases, ossicular mobility is assessed by palpation. Palpation is inherently subjective and may not always be reliable, especially with milder degrees of ossicular fixation and in cases of multiple fixation. Although several devices have been developed to quantitatively measure ossicular mobility during surgery, they have not been widely used. In this study, a new system with a hand-held probe that enables intraoperative quantitative measurement of ossicular mobility has been developed. This system not only measures ossicular mobility but also investigates “local” transmission characteristics of the middle ear by directly applying vibration to the ossicles and measuring the cochlear microphonic. The basic performance of the system was confirmed by measuring the mobility of artificial ossicles and cochlear microphonics in an animal experiment. Our system may contribute to the selection of a better surgical method and reduce the risk of revision surgery.
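
To make the idea of a “local” transfer function concrete, the sketch below estimates a magnitude ratio between a drive signal and a recorded cochlear microphonic at the probe frequency; all signals and values are simulated placeholders, and this is not the authors' implementation.

import numpy as np

fs = 48000                                         # assumed sampling rate (Hz)
t = np.arange(int(0.5 * fs)) / fs                  # 0.5 s of samples
drive = np.sin(2 * np.pi * 1000 * t)               # 1 kHz probe vibration (placeholder)
cm = 0.01 * np.sin(2 * np.pi * 1000 * t + 0.3)     # simulated cochlear microphonic response

# Magnitude of the response-to-drive ratio at the probe frequency
Drive = np.fft.rfft(drive)
Cm = np.fft.rfft(cm)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k = np.argmin(np.abs(freqs - 1000))                # FFT bin closest to 1 kHz
transfer_magnitude = np.abs(Cm[k]) / np.abs(Drive[k])
print(f"|H(1 kHz)| = {transfer_magnitude:.3f}")    # ~0.01 for these placeholder signals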



from #Audiology via ola Kala on Inoreader https://ift.tt/2S2n8NQ
via IFTTT
