Sunday 13 May 2018

Improved Detection of Vowel Envelope Frequency Following Responses Using Hotelling's T² Analysis

Objectives: Objective detection of brainstem responses to natural speech stimuli is an important tool for the evaluation of hearing aid fitting, especially in people who may not be able to respond reliably in behavioral tests. Of particular interest is the envelope frequency following response (eFFR), which refers to the EEG response at the stimulus' fundamental frequency (and its harmonics), here in particular the response to naturally spoken vowel sounds. This article introduces the frequency-domain Hotelling's T² (HT2) method for eFFR detection. This method was compared, in terms of sensitivity in detecting eFFRs at the fundamental frequency (HT2_F0), with two single-channel frequency-domain methods (the F test on Fourier analyzer (FA) amplitude spectra [FA-F-Test] and magnitude-squared coherence [MSC]) in detecting envelope following responses to natural vowel stimuli in simulated data and in EEG data from normal-hearing subjects. Sensitivity was assessed based on the number of detections and the time needed to detect a response at a false-positive rate of 5%. The study also explored whether a single-channel, multifrequency HT2 (HT2_3F) and a multichannel, multifrequency HT2 (HT2_MC) could further improve response detection.

Design: Four repeated words were presented sequentially at 70 dB SPL LAeq through ER-2 insert earphones. The stimuli consisted of a prolonged vowel in a /hVd/ structure (where V represents different vowel sounds). Each stimulus was presented over 440 sweeps (220 condensation and 220 rarefaction). EEG data were collected from 12 normal-hearing adult participants. After preprocessing and artifact removal, eFFR detection was compared between the algorithms. For the simulation study, simulated EEG signals were generated by adding random noise at multiple signal-to-noise ratios (SNRs; 0 to −60 dB) to the auditory stimuli as well as to a single sinusoid at the fluctuating and at the flattened fundamental frequency (f0). For each SNR, 1000 sets of 440 simulated epochs were generated. Performance of the algorithms was assessed based on the number of sets for which a response could be detected at each SNR.

Results: In the simulation studies, HT2_3F significantly outperformed the other algorithms when detecting a vowel stimulus in noise. For simulations containing responses at only a single frequency, HT2_3F performed worse than the other approaches applied in this study, as the additional frequencies included do not contain additional information. For recorded EEG data, HT2_MC showed a significantly higher response detection rate than MSC and FA-F-Test. Both HT2_MC and HT2_F0 also showed a significant reduction in detection time compared with the FA-F-Test algorithm. Comparisons between different electrode locations confirmed a higher number of detections for electrodes close to Cz compared with more peripheral locations.

Conclusion: The HT2 method is more sensitive than FA-F-Test and MSC in detecting responses to complex stimuli because it allows detection over multiple frequencies (HT2_3F) and multiple EEG channels (HT2_MC) simultaneously. This effect was shown in the simulation studies for HT2_3F and in the EEG data for the HT2_MC algorithm. The spread in detection time across subjects is also lower for the HT2 algorithm, with a decision on the presence of an eFFR possible within 5 min.

Acknowledgments: The experiments were designed by F.J.V., S.L.B., and D.M.S.; F.J.V. and M.C. performed the experiments and analyzed the data; F.J.V. wrote the article; and S.L.B., M.C., and D.M.S. provided critical revision. The authors thank Louise Goodwin for her technical support in the experimental setup. This research was funded by the Engineering and Physical Sciences Research Council (EPSRC), UK (grant No. EP/M026728/1). All data supporting this study are openly available from the University of Southampton repository at https://ift.tt/2jUX1tg. The authors have no conflicts of interest to disclose. Address for correspondence: Steven L. Bell, Institute of Sound and Vibration Research, University of Southampton, Highfield Campus, Tizard Building 13/4015, University Road, Southampton, SO17 1BJ, United Kingdom. E-mail: S.L.Bell@soton.ac.uk Received March 23, 2017; accepted March 19, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
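
To make the statistic concrete, here is a minimal sketch of a frequency-domain Hotelling's T² detector of the kind described above: each epoch contributes the real and imaginary parts of its Fourier coefficient at one or more target frequencies (f0 alone for an HT2_F0-style test; f0, 2f0, and 3f0 for a three-frequency variant analogous to HT2_3F), and the test asks whether the mean coefficient vector differs from zero. The function name, bin selection, and epoch layout are our illustrative assumptions, not code from the article.

import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_detect(epochs, fs, target_freqs, alpha=0.05):
    # epochs: (n_epochs, n_samples) array of single-channel EEG sweeps
    # fs: sampling rate in Hz; target_freqs: frequencies tested jointly
    n_epochs, n_samples = epochs.shape
    spectra = np.fft.rfft(epochs, axis=1)
    bin_width = fs / n_samples
    cols = []
    for freq in target_freqs:
        k = int(round(freq / bin_width))      # nearest FFT bin to the target
        cols.append(spectra[:, k].real)
        cols.append(spectra[:, k].imag)
    X = np.column_stack(cols)                 # (n_epochs, 2 * n_freqs) features
    n, p = X.shape
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # H0: the mean complex Fourier coefficient is zero, i.e. there is no
    # phase-locked response at the target frequencies.
    t2 = n * mean @ np.linalg.solve(cov, mean)
    f_stat = (n - p) / (p * (n - 1)) * t2     # one-sample T^2 -> F(p, n - p)
    p_value = f_dist.sf(f_stat, p, n - p)
    return t2, p_value, p_value < alpha

Stacking the corresponding coefficients from several electrodes into the same feature matrix before the test would give a multichannel variant in the spirit of HT2_MC, at the cost of a larger dimension p and hence a heavier burden on the covariance estimate.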

from #Audiology via ola Kala on Inoreader https://ift.tt/2IeZskH
via IFTTT

Are You There for Me? Joint Engagement and Emotional Availability in Parent–Child Interactions for Toddlers With Moderate Hearing Loss

Objectives: This study examined joint engagement and emotional availability in parent–child interactions for toddlers with moderate hearing loss (MHL) compared with toddlers with normal hearing (NH), and in relation to children's language abilities.

Design: The participants in this study were 25 children with MHL (40 to 60 dB hearing loss) and 26 children with NH (mean age: 33.3 months). The children and their parents were filmed during a 10-minute free play session in their homes. The duration of joint engagement and the success rate of initiations were coded, alongside the level of emotional availability as reflected by the Emotional Availability Scales. Receptive and expressive language tests were administered to the children to examine their language ability.

Results: The groups differed in joint engagement: children with MHL and their parents were less successful in establishing joint engagement and had briefer episodes of joint engagement than children with NH and their parents. No differences between groups were found for the emotional availability measures. Both joint engagement and emotional availability measures were positively related to children's language ability.

Conclusions: Children with MHL and their parents are emotionally available to each other. However, they have more difficulty establishing joint engagement with each other and have briefer episodes of joint engagement than children with NH and their parents. The parent–child interactions of children with better language abilities are characterized by higher levels of emotional availability and longer episodes of joint engagement. The results imply that the interactions of children with MHL and their parents are an important target for family-centered early intervention programs.

ACKNOWLEDGMENTS: The authors thank Tinka Kriens and Elinor Hilton for their contributions in coding the parent–child interactions. The authors have no conflicts of interest to disclose. Address for correspondence: Evelien Dirks, Developmental Psychology, Leiden University, P.O. Box 9555, 2300 RB Leiden, the Netherlands. E-mail: e.dirks@fsw.leidenuniv.nl; edirks@nsdsk.nl Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). Received July 17, 2017; accepted March 16, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Igv8WU
via IFTTT

Effects of Additional Low-Pass–Filtered Speech on Listening Effort for Noise-Band–Vocoded Speech in Quiet and in Noise

Objectives: Residual acoustic hearing in electric–acoustic stimulation (EAS) can benefit cochlear implant (CI) users through increased sound quality, better speech intelligibility, and improved tolerance to noise. The goal of this study was to investigate whether the low-pass–filtered acoustic speech in simulated EAS can provide the additional benefit of reducing listening effort for the spectrotemporally degraded signal of noise-band–vocoded speech.

Design: Listening effort was investigated using a dual-task paradigm as a behavioral measure and the NASA Task Load indeX as a subjective self-report measure. The primary task of the dual-task paradigm was identification of sentences presented, across three experiments, at three fixed intelligibility levels: near-ceiling, 50%, and 79% intelligibility, achieved by manipulating the presence and level of speech-shaped noise in the background. Listening effort for the primary intelligibility task was reflected in performance on the secondary, visual response time task. Experimental speech processing conditions included monaural or binaural vocoder, with added low-pass–filtered speech (to simulate EAS) or without (to simulate CI).

Results: In Experiment 1, in quiet with intelligibility near ceiling, additional low-pass–filtered speech reduced listening effort compared with the binaural vocoder, in line with our expectations, although not compared with the monaural vocoder. In Experiments 2 and 3, for speech in noise, added low-pass–filtered speech allowed the desired intelligibility levels to be reached at less favorable speech-to-noise ratios, as expected. Interestingly, this came without the cost of increased listening effort usually associated with poor speech-to-noise ratios; at 50% intelligibility, a reduction in listening effort was even observed on top of the increased tolerance to noise. The NASA Task Load indeX did not capture these differences.

Conclusions: The dual-task results provide partial evidence for a potential decrease in listening effort as a result of adding low-frequency acoustic speech to noise-band–vocoded speech. Whether these findings translate to CI users with residual acoustic hearing will need to be addressed in future research, because the quality and frequency range of low-frequency acoustic sound available to listeners with hearing loss may differ from our idealized simulations, and additional factors, such as advanced age and varying etiology, may also play a role. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

ACKNOWLEDGMENTS: The authors gratefully acknowledge Thomas Stainsby for his help and suggestions concerning this research; Filiep Vanpoucke and three anonymous reviewers for commenting on an earlier version of this article; and Bert Maat, Frits Leemhuis, Annemieke ter Harmsel, Matthias Haucke, and Marije Sleurink for their help seeing this project through. This research was partially funded by Cochlear Ltd, Dorhout Mees Stichting, Stichting Steun Gehoorgestoorde Kind, the Heinsius Houbolt Foundation, a Rosalind Franklin Fellowship from the University of Groningen, and the Netherlands Organization for Scientific Research (Dutch: Nederlandse Organisatie voor Wetenschappelijk Onderzoek, NWO; Vidi Grant 016.096.397), and is part of the research program of the University Medical Center Groningen: Healthy Aging and Communication. Preliminary results of this study were presented as a poster at the 2nd International Conference on Cognitive Hearing Science for Communication (Linköping, Sweden, 2013) and are described in one chapter of the PhD thesis "Listening effort: The hidden costs and benefits of cochlear implants" by Carina Pals (2016). The authors have no conflicts of interest to disclose. Carina Pals is now at the Department of Psychology, University of Utah Asia Campus, Incheon, Korea. Mart van Dijk is now at the Department of Work & Social Psychology, Maastricht University, Maastricht, the Netherlands. Address for correspondence: Carina Pals, E-mail: contact@carinapals.com, and Deniz Başkent, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, P.O. Box 30.001, 9700 RB Groningen, The Netherlands. E-mail: d.baskent@umcg.nl Received October 13, 2016; accepted March 4, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
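
As background on the simulated conditions, below is a minimal sketch of a noise-band vocoder (the CI simulation) with optional added low-pass speech (the EAS simulation). The channel count, filter orders, band edges, envelope cutoff, and the 600 Hz low-pass cutoff are our illustrative assumptions, not the article's actual processing parameters.

import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_band_vocoder(speech, fs, n_channels=8, lo=100.0, hi=8000.0):
    # Split the speech spectrum into log-spaced bands, take each band's
    # Hilbert envelope, and use it to modulate noise filtered into the
    # same band; summing the channels gives the vocoded signal.
    # Assumes fs is well above 2 * hi.
    edges = np.geomspace(lo, hi, n_channels + 1)
    env_lp = butter(4, 300.0, btype='low', fs=fs, output='sos')  # envelope smoothing (assumed cutoff)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
        envelope = sosfilt(env_lp, np.abs(hilbert(sosfilt(band_sos, speech))))
        out += envelope * sosfilt(band_sos, rng.standard_normal(len(speech)))
    return out

def simulate_eas(speech, fs, cutoff=600.0):
    # EAS simulation: vocoded speech plus the original speech low-pass
    # filtered at `cutoff` Hz (cutoff chosen here for illustration only).
    lp = butter(6, cutoff, btype='low', fs=fs, output='sos')
    return noise_band_vocoder(speech, fs) + sosfilt(lp, speech)

Comparing sentence identification and secondary-task response times for noise_band_vocoder(speech, fs) versus simulate_eas(speech, fs), in quiet and with speech-shaped noise added, mirrors the CI versus EAS contrast of the three experiments.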

from #Audiology via ola Kala on Inoreader https://ift.tt/2IbE0RC
via IFTTT

Tinnitus Severity Is Related to the Sound Exposure of Symphony Orchestra Musicians Independently of Hearing Impairment

Objectives: Tinnitus can be debilitating and can have a great impact on musicians' professional and private lives. The objectives of the study were therefore to: (1) describe the epidemiology of tinnitus, including its severity, in classical orchestra musicians, (2) investigate the association between tinnitus severity in classical musicians and their cumulative lifetime sound exposure, and (3) investigate the association between tinnitus and hearing thresholds.

Design: The study population included all musicians from five Danish symphony orchestras. Answers regarding their perception of tinnitus were received from 325 musicians, and 212 musicians were also tested with audiometry. Two definitions of tinnitus, any tinnitus and severe tinnitus, were used as outcomes and analyzed in relation to the cumulative lifetime sound exposure, estimated from sound measurements and previously validated questionnaires, and to the average hearing threshold at 3, 4, and 6 kHz.

Results: Thirty-five percent of all musicians (31% of female and 38% of male musicians) reported having experienced at least one episode of tinnitus lasting for more than 5 minutes during their life. Severe tinnitus with a severe impact on daily life was reported by 19% of the musicians (18% of female and 21% of male musicians). The severity of tinnitus was associated with increased lifetime sound exposure but not with poorer high-frequency hearing thresholds when the lifetime sound exposure was considered. The odds ratio for an increase of one unit in tinnitus severity was 1.25 (95% CI, 1.12–1.40) for every 1 dB increase in lifetime sound exposure.

Conclusion: Musicians frequently report tinnitus. Any tinnitus and severe tinnitus are significantly associated with the cumulative lifetime sound exposure, which was shown to be the most important factor not only for the prevalence but also for the severity of tinnitus, even in musicians without hearing loss. High-frequency hearing thresholds and tinnitus severity were correlated only if the cumulative lifetime sound exposure was excluded from the analyses. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

ACKNOWLEDGMENTS: The present study was approved by the regional ethical committee, and informed consent was given by all test subjects. Supported by the Danish Working Environment Research Fund (no. 20070014504), Ear, Nose and Throat specialist Hans Skouby's and Hustru Emma Skouby's Foundation, Oto-rhino-laryngologist L. Mahler's and N.R. Blegvad's foundation for young oto-rhino-laryngologists, and the Region of Southern Denmark (12/7740). The authors declare no other conflict of interest. Address for correspondence: Jesper Hvass Schmidt, Department of ORL Head and Neck Surgery and Audiology, Odense University Hospital, Kløvervænget 19, Indgang 85, 3.sal, 5000 Odense C, Denmark. E-mail: jesper.schmidt@rsyd.dk Received August 5, 2016; accepted March 21, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
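
For a sense of scale, odds ratios on a continuous predictor compose multiplicatively across units of that predictor, so the reported per-dB estimate can be rescaled to larger exposure differences. The back-of-envelope calculation below simply rescales the published figure and is not an additional analysis from the study.

# Reported: OR = 1.25 (95% CI, 1.12-1.40) per 1 dB of cumulative
# lifetime sound exposure, for moving up one tinnitus-severity unit.
or_per_db = 1.25
ci_low, ci_high = 1.12, 1.40
delta_db = 10  # e.g., a career with 10 dB higher lifetime exposure
print(or_per_db ** delta_db)                    # ~9.3x odds of higher severity
print(ci_low ** delta_db, ci_high ** delta_db)  # CI rescales to ~3.1 to ~28.9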

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ih40aj
via IFTTT

A novel pathogenic variant in the MARVELD2 gene causes autosomal recessive non-syndromic hearing loss in an Iranian family.

Genomics. 2018 May 09.

Authors: Taghipour-Sheshdeh A, Nemati-Zargaran F, Zarepour N, Tahmasebi P, Saki N, Tabatabaiefar MA, Mohammadi-Asl J, Hashemzadeh-Chaleshtori M

Abstract
BACKGROUND AND AIMS: Hearing loss (HL) is the most common sensorineural disorder and one of the most common human defects. HL can be classified according to several main criteria: site (conductive, sensorineural, and mixed), onset (pre-lingual and post-lingual), accompanying signs and symptoms (syndromic and non-syndromic), severity (mild, moderate, severe, and profound), and mode of inheritance (autosomal recessive, autosomal dominant, X-linked, and mitochondrial). Autosomal recessive non-syndromic HL (ARNSHL) forms constitute a major share of HL cases. In the present study, next-generation sequencing (NGS) was applied to investigate the underlying etiology of HL in a multiplex ARNSHL family from Khuzestan province, southwest Iran.
METHODS: In this descriptive study, 20 multiplex ARNSHL families from Khuzestan province, southwest Iran, were recruited. After DNA extraction, genetic linkage analysis (GLA) was applied to screen a panel of the more prevalent loci. One family, which was not linked to these loci, was subjected to the Otogenetics deafness NGS panel.
RESULTS: The NGS results showed a novel deletion-insertion variant (c.1555delinsAA) in the MARVELD2 gene. The variant, a frameshift in the seventh exon of the MARVELD2 gene, fulfills the criteria for categorization as pathogenic according to the American College of Medical Genetics and Genomics (ACMG) guidelines.
CONCLUSION: NGS is very promising for identifying the molecular etiology of highly heterogeneous diseases such as HL. MARVELD2 might be important in the etiology of HL in this region of Iran.
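
To see why a deletion-insertion such as c.1555delinsAA is frameshifting, note that it removes one base and inserts two, a net gain of one base that shifts every downstream codon. The toy example below demonstrates this with an invented sequence and position; it is not the MARVELD2 coding sequence.

def codons(seq):
    # Split a coding sequence into complete triplets.
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

ref = "ATGGCTGACGAAGGTCTTTGA"           # hypothetical coding sequence
pos = 9                                 # 0-based index of the deleted base
alt = ref[:pos] + "AA" + ref[pos + 1:]  # delins: one base out, two bases in

print(codons(ref))  # ['ATG', 'GCT', 'GAC', 'GAA', 'GGT', 'CTT', 'TGA']
print(codons(alt))  # reading frame shifts after the variant site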

PMID: 29752989 [PubMed - as supplied by publisher]

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2IzDQmC
via IFTTT