Tuesday, June 12, 2018

The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and Evaluation

Objectives: The objectives of this research were to develop and evaluate a self-report questionnaire (the Emotional Communication in Hearing Questionnaire, or EMO-CHeQ) designed to assess experiences of hearing and handicap when listening to signals that contain vocal emotion information. Design: Study 1 involved internet-based administration of a 42-item version of the EMO-CHeQ to 586 adult participants (243 with self-reported normal hearing [NH], 193 with self-reported hearing impairment but no reported use of hearing aids [HI], and 150 with self-reported hearing impairment and use of hearing aids [HA]). To better understand the factor structure of the EMO-CHeQ and eliminate redundant items, an exploratory factor analysis was conducted. Study 2 involved laboratory-based administration of a 16-item version of the EMO-CHeQ to 32 adult participants (12 with normal or near-normal hearing [NH/nNH], 10 HI, and 10 HA). In addition, participants completed an emotion-identification task under audio and audiovisual conditions. Results: In study 1, the exploratory factor analysis yielded an interpretable solution, with four factors emerging that explained a total of 66.3% of the variance in performance on the EMO-CHeQ. Item deletion resulted in construction of the 16-item EMO-CHeQ. In study 1, both the HI and HA groups reported greater vocal emotion communication handicap on the EMO-CHeQ than the NH group, but differences in handicap were not observed between the HI and HA groups. In study 2, the same pattern of reported handicap was observed in individuals whose hearing status was audiometrically verified as was found in study 1. On the emotion-identification task, no group differences in performance were observed in the audiovisual condition, but group differences were observed in the audio-alone condition. Although the HI and HA groups exhibited similar emotion-identification performance, both groups performed worse than the NH/nNH group, suggesting the presence of behavioral deficits that parallel self-reported vocal emotion communication handicap. The EMO-CHeQ was significantly and strongly (r = −0.64) correlated with performance on the emotion-identification task for listeners with hearing impairment. Conclusions: The results from both studies suggest that the EMO-CHeQ is a reliable and ecologically valid measure for rapidly assessing experiences of hearing and handicap when listening to signals that contain vocal emotion information. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. ACKNOWLEDGMENTS: The authors have no conflicts of interest to disclose. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). Received May 17, 2017; accepted April 4, 2018. Address for correspondence: Gurjit Singh, Department of Psychology, Ryerson University, 350 Victoria Street, Toronto, Ontario, Canada M5B 2K3. E-mail: Gurjit.singh@phonak.com Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
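As a rough illustration of the analysis pipeline described in this abstract (not the authors' actual code), the sketch below shows how a four-factor exploratory factor analysis and the questionnaire-versus-task correlation might be computed in Python. The choice of the factor_analyzer and scipy libraries, the rotation method, the file names, and the column names are all assumptions.

```python
# Hypothetical sketch: exploratory factor analysis of questionnaire items and a
# correlation between questionnaire scores and emotion-identification accuracy.
# Library choices, data files, and column names are assumptions; the abstract
# does not specify the authors' software or scoring pipeline.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from scipy.stats import pearsonr

# One row per respondent, one column per questionnaire item (e.g., 42 columns).
items = pd.read_csv("emo_cheq_items.csv")  # hypothetical file

# Fit a four-factor EFA; the oblique rotation and extraction method are assumptions.
efa = FactorAnalyzer(n_factors=4, rotation="oblimin", method="minres")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
ss_loadings, prop_var, cum_var = efa.get_factor_variance()
print(f"Cumulative variance explained by 4 factors: {cum_var[-1]:.1%}")

# Flag weakly loading items as candidates for deletion (cutoff is illustrative).
weak = loadings[loadings.abs().max(axis=1) < 0.40]
print("Candidate items to drop:", list(weak.index))

# Correlate the total questionnaire score with audio-only emotion-identification
# accuracy (both columns hypothetical); the abstract reports r = -0.64 for
# listeners with hearing impairment.
scores = pd.read_csv("study2_scores.csv")  # hypothetical file
r, p = pearsonr(scores["emo_cheq_total"], scores["emotion_id_accuracy"])
print(f"r = {r:.2f}, p = {p:.3f}")
```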

from #Audiology via ola Kala on Inoreader https://ift.tt/2t2d8ZZ
via IFTTT

Failure on the Foam Eyes Closed Test of Standing Balance Associated With Reduced Semicircular Canal Function in Healthy Older Adults

Objectives: Standing on foam with eyes closed (FOEC) has been characterized as a measure of vestibular function; however, the relative contributions of vestibular function and proprioceptive function to the FOEC test have not been well described. In this study, the authors investigate the relationship between peripheral sensory systems (vestibular and proprioceptive) and performance on the FOEC test in a cohort of healthy adults. Design: A total of 563 community-dwelling healthy adults (mean age, 72.7 [SD, 12.6] years; range, 27 to 93 years) participating in the Baltimore Longitudinal Study of Aging were tested. Proprioceptive threshold (PROP) was evaluated with passive motion detection at the right ankle. Vestibulo-ocular reflex (VOR) gain was measured using video head impulses. Otolith function was measured with cervical and ocular vestibular-evoked myogenic potentials. Participants stood in the FOEC condition for 40 sec while wearing a BalanSens system (BioSensics, LLC, Watertown, MA) to quantify center-of-mass sway area. A mixed-model multiple logistic regression was used to examine the odds of passing the FOEC test based on PROP, VOR, cervical vestibular-evoked myogenic potential, and ocular vestibular-evoked myogenic potential function in a multisensory model while controlling for age and gender. Results: The odds of passing the FOEC test decreased by 15% (p
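The Results statement about a 15% decrease in the odds of passing is a standard odds-ratio interpretation. The sketch below is a simplified fixed-effects stand-in for the mixed-model multiple logistic regression described in the Design, showing how exponentiated coefficients translate into percent changes in odds; the data file, column names, and model structure are assumptions.

```python
# Simplified sketch (not the authors' mixed model): a plain multiple logistic
# regression relating FOEC pass/fail to sensory predictors, adjusting for age
# and gender. The data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("blsa_foec.csv")  # hypothetical file: one row per participant

model = smf.logit(
    "foec_pass ~ prop_threshold + vor_gain + cvemp_amp + ovemp_amp + age + C(gender)",
    data=df,
).fit()

# Odds ratios: exp(coefficient) per one-unit change in each predictor.
odds_ratios = np.exp(model.params)
print(odds_ratios)

# Interpretation: an odds ratio of 0.85 corresponds to a (0.85 - 1) * 100 = -15%
# change in the odds of passing per unit increase in that predictor, i.e., the
# kind of "odds decreased by 15%" statement reported in the Results.
pct_change = (odds_ratios - 1.0) * 100.0
print(pct_change.round(1))
```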

from #Audiology via ola Kala on Inoreader https://ift.tt/2l7UyvP
via IFTTT

A “Goldilocks” Approach to Hearing Aid Self-Fitting: Ear-Canal Output and Speech Intelligibility Index

Objectives: The objective was to determine the self-adjusted output response and speech intelligibility index (SII) in individuals with mild to moderate hearing loss and to measure the effects of prior hearing aid experience. Design: Thirteen hearing aid users and 13 nonusers, with similar group-mean pure-tone thresholds, listened to prerecorded and preprocessed sentences spoken by a man. Starting with a generic level and spectrum, participants adjusted (1) overall level, (2) high-frequency boost, and (3) low-frequency cut. Participants took a speech perception test after an initial adjustment and before making a final adjustment. The three self-selected parameters, along with individual thresholds and real-ear-to-coupler differences, were used to compute output levels and SIIs for the starting and the two self-adjusted conditions. The values were compared with the NAL-NL2 threshold-based prescription (the National Acoustic Laboratories' second-generation nonlinear formula) and, for the hearing aid users, with the performance of their existing hearing aids. Results: All participants were able to complete the self-adjustment process. The generic starting condition provided outputs (between 2 and 8 kHz) and SIIs that were significantly below those prescribed by NAL-NL2. Both groups increased SII to values that were not significantly different from the prescription. The hearing aid users, but not the nonusers, increased high-frequency output and SII significantly after taking the speech perception test. Seventeen of the 26 participants (65%) met an SII criterion of 60% under the generic starting condition. The proportion increased to 23 of 26 (88%) after the final self-adjustment. Of the 13 hearing aid users, 8 (62%) met the 60% criterion with their existing hearing aids. With the final self-adjustment, 12 of 13 (92%) met this criterion. Conclusions: The findings support the conclusion that user self-adjustment of basic amplification characteristics can be both feasible and effective, with or without prior hearing aid experience. ACKNOWLEDGMENTS: This work was supported by National Institute on Deafness and Other Communication Disorders grant numbers R21DC015046 and R33DC015046 to San Diego State University (Dr. Carol Mackersie, Principal Investigator, and Harinath Garudadri, University of California San Diego, Co-Principal Investigator). The authors have no conflicts of interest to disclose. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com). Received December 5, 2017; accepted April 11, 2018. Address for correspondence: Carol Mackersie, School of Speech, Language, and Hearing Sciences, San Diego State University, 5500 Campanile Drive, MC-1518, San Diego, CA 92182, USA. E-mail: cmackers@sdsu.edu Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
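To make the SII idea concrete, the toy sketch below computes a band-importance-weighted audibility index from a generic starting output, a listener's thresholds, and the three self-adjustment controls (overall level, high-frequency boost, low-frequency cut). It is a deliberate simplification, not the ANSI S3.5 SII procedure used in the study; the band weights, output levels, thresholds, and adjustment amounts are hypothetical placeholders.

```python
# Toy illustration (not the ANSI S3.5 SII or the study's computation): a
# band-importance-weighted audibility index showing how the three self-adjusted
# parameters reshape a generic starting output and change the resulting index.
# All numbers are hypothetical placeholders.
import numpy as np

freqs_hz   = np.array([250, 500, 1000, 2000, 4000, 8000])
importance = np.array([0.10, 0.15, 0.25, 0.25, 0.15, 0.10])  # hypothetical weights, sum to 1
start_out  = np.array([65, 62, 58, 52, 45, 40], float)       # generic starting output (dB SPL)
thresholds = np.array([30, 35, 40, 50, 60, 65], float)       # listener's thresholds (dB SPL)

def adjust(output, overall_db=0.0, hf_boost_db=0.0, lf_cut_db=0.0):
    """Apply the three self-adjustment controls to the band output levels."""
    out = output + overall_db
    out = np.where(freqs_hz >= 2000, out + hf_boost_db, out)  # boost 2 kHz and above
    out = np.where(freqs_hz <= 500, out - lf_cut_db, out)     # cut 500 Hz and below
    return out

def audibility_index(output, thresholds):
    """Clip each band's level above threshold into a 30 dB window, weight by importance."""
    band_aud = np.clip((output - thresholds) / 30.0, 0.0, 1.0)
    return float(np.sum(importance * band_aud))

print("start:", round(audibility_index(start_out, thresholds), 2))
adjusted = adjust(start_out, overall_db=6, hf_boost_db=8, lf_cut_db=3)
print("after self-adjustment:", round(audibility_index(adjusted, thresholds), 2))
```

With these placeholder values, the index rises from roughly 0.40 to roughly 0.58 after self-adjustment, illustrating the kind of movement toward a 60% criterion described in the Results.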

from #Audiology via ola Kala on Inoreader https://ift.tt/2t1ClDZ
via IFTTT
