Sunday, 19 August 2018

Speech Auditory Brainstem Responses: Effects of Background, Stimulus Duration, Consonant–Vowel, and Number of Epochs

Objectives: The aims of this study were to systematically explore the effects of stimulus duration, background (quiet versus noise), and three consonant–vowels on speech-auditory brainstem responses (ABRs). Additionally, the minimum number of epochs required to record speech-ABRs with clearly identifiable waveform components was assessed. The purpose was to evaluate whether shorter duration stimuli could be reliably used to record speech-ABRs both in quiet and in background noise to the three consonant–vowels, as opposed to the longer duration stimuli that are commonly used in the literature. Shorter duration stimuli and a smaller number of epochs would require shorter test sessions and thus encourage the transition of the speech-ABR from research to clinical practice.

Design: Speech-ABRs in response to 40 msec [da], 50 msec [ba] [da] [ga], and 170 msec [ba] [da] [ga] stimuli were collected from 12 normal-hearing adults with confirmed normal click-ABRs. Monaural (right-ear) speech-ABRs were recorded to all stimuli in quiet and to 40 msec [da], 50 msec [ba] [da] [ga], and 170 msec [da] in a background of two-talker babble at +10 dB signal to noise ratio, using a 2-channel electrode montage (Cz-active, A1 and A2-reference, Fz-ground). Twelve thousand epochs (6000 per polarity) were collected for each stimulus and background from all participants. Latencies and amplitudes of speech-ABR peaks (V, A, D, E, F, O) were compared across backgrounds (quiet and noise) for all stimulus durations, across stimulus durations (50 and 170 msec), and across consonant–vowels ([ba], [da], and [ga]). Additionally, the degree of phase locking to the stimulus fundamental frequency (in quiet versus noise) was evaluated for the frequency following response in speech-ABRs to the 170 msec [da]. Finally, the number of epochs required for a robust response was evaluated using the Fsp statistic and bootstrap analysis at different epoch iterations.
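A common way to quantify phase locking to the stimulus fundamental frequency across epochs is the phase-locking value (PLV): the resultant length of per-epoch FFT phases at the F0 bin. The sketch below is illustrative only; the paper's exact analysis is not specified in the abstract, and the sampling rate, F0, epoch counts, and synthetic data here are all assumptions.

```python
import numpy as np

def phase_locking_value(epochs, fs, f0):
    """Phase-locking value (PLV) at frequency f0 across epochs.

    epochs : (n_epochs, n_samples) array of single-sweep responses
    fs     : sampling rate in Hz
    f0     : frequency of interest (e.g., the stimulus fundamental)

    Returns a value in [0, 1]: 1 means the phase at f0 is identical on
    every sweep (perfect phase locking); 0 means random phase.
    """
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f0_bin = np.argmin(np.abs(freqs - f0))       # nearest FFT bin to f0
    spectra = np.fft.rfft(epochs, axis=1)
    phases = np.angle(spectra[:, f0_bin])        # per-epoch phase at f0
    return np.abs(np.mean(np.exp(1j * phases)))  # resultant vector length

# Synthetic demo: a 100 Hz "F0" component buried in noise on every sweep
fs, f0, n_epochs, n_samp = 16000, 100, 500, 1600
t = np.arange(n_samp) / fs
rng = np.random.default_rng(0)
epochs = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 3, (n_epochs, n_samp))
plv = phase_locking_value(epochs, fs, f0)        # high: phase is consistent
```

The finding of no background effect on phase locking would correspond, in these terms, to similar PLVs at F0 for the quiet and noise recordings.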
Results: Background effect: the addition of background noise resulted in speech-ABRs with longer peak latencies and smaller peak amplitudes compared with speech-ABRs in quiet, irrespective of stimulus duration. However, there was no effect of background noise on the degree of phase locking of the frequency following response to the stimulus fundamental frequency in speech-ABRs to the 170 msec [da]. Duration effect: speech-ABR peak latencies and amplitudes did not differ in response to the 50 and 170 msec stimuli. Consonant–vowel effect: different consonant–vowels did not have an effect on speech-ABR peak latencies regardless of stimulus duration. Number of epochs: a larger number of epochs was required to record speech-ABRs in noise than in quiet, and a smaller number of epochs was required to record speech-ABRs to the 40 msec [da] than to the 170 msec [da].

Conclusions: This is the first study to systematically investigate the clinical feasibility of speech-ABRs in terms of stimulus duration, background noise, and number of epochs. Speech-ABRs can be reliably recorded to the 40 msec [da] without compromising response quality, even when presented in background noise. Because fewer epochs were needed for the 40 msec [da], this would be the optimal stimulus for clinical use. Finally, given that there was no effect of consonant–vowel on speech-ABR peak latencies, there is no evidence that speech-ABRs are suitable for assessing auditory discrimination of the stimuli used.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial License 4.0 (CC BY-NC), where it is permissible to download, share, remix, transform, and build upon the work provided it is properly cited. The work cannot be used commercially without permission from the journal. Supplemental digital content is available for this article.
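The epoch-count analysis rests on the Fsp statistic: the ratio of the variance (over time) of the averaged waveform to the residual noise variance of that average, the latter estimated from the across-epoch variance at a single fixed time point. The sketch below follows one common formulation (after Elberling and Don); it is not the paper's exact implementation, and the synthetic signal, noise level, and epoch counts are assumptions for illustration.

```python
import numpy as np

def fsp(epochs):
    """Fsp response-quality statistic for an averaged evoked response.

    epochs : (n_epochs, n_samples) array of single sweeps.
    Returns signal variance of the average divided by the estimated
    noise variance of the average; larger values mean a more robust
    response, so Fsp grows as more epochs are averaged.
    """
    n_epochs = epochs.shape[0]
    avg = epochs.mean(axis=0)
    signal_var = np.var(avg)                   # variance over time points
    single_point = epochs[:, epochs.shape[1] // 2]
    noise_var = np.var(single_point, ddof=1) / n_epochs
    return signal_var / noise_var

# Fsp as a function of the number of epochs included in the average
fs, f0, n_samp = 16000, 100, 1600
t = np.arange(n_samp) / fs
rng = np.random.default_rng(1)
all_epochs = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 2, (6000, n_samp))
for n in (500, 1000, 2000, 4000, 6000):
    print(n, round(float(fsp(all_epochs[:n])), 1))
```

Evaluating Fsp at successive epoch-count iterations, as in the study's design, amounts to finding the smallest n at which the statistic clears a chosen quality criterion; noisier recordings (e.g., in babble) need a larger n to reach the same Fsp.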
Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).

ACKNOWLEDGMENTS: The authors thank Dr Timothy Wilding, Dr Emanuele Perugia, and Dr Frederic Marmel at the Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, for their help in writing the MATLAB code for some of the data processing. The authors also thank the Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, IL, USA, for the provision of the stimuli (consonant–vowels and background babble) used in this study. G.B. designed and performed the experiment, analyzed the data, and wrote the paper; A.L. was involved in experiment design and interpretation of results; S.L.B. and G.P. were involved in data processing and MATLAB coding, and reviewed results; M.O. was involved in study setup and reviewed results; K.K. was involved in experiment design, data analyses, and interpretation of results. All authors discussed the results and commented on the manuscript at all stages. This research was funded by the Saudi Arabian Ministry of Education and King Fahad Medical City (to G.B.) and by the Engineering and Physical Sciences Research Council grant EP/M026728/1 (to K.K. and S.L.B.). Portions of this article were previously presented at the XXV International Evoked Response Audiometry Study Group Biennial Symposium, Warsaw, Poland, May 22, 2017; at the 40th MidWinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, USA, February 12, 2017; and at the Basic Auditory Science Meeting, Cambridge, United Kingdom, September 5, 2016. Raw EEG data (speech-ABRs) for this study may be accessed at https://ift.tt/2MpP0wP. The authors have no conflicts of interest to declare.
Address for correspondence: Ghada BinKhamis, Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, Room A3.08, Ellen Wilkinson Building, University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom. E-mail: ghada.binkhamis@manchester.ac.uk; or Karolina Kluk, Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, Room B2.15, Ellen Wilkinson Building, University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom. E-mail: Karolina.Kluk@manchester.ac.uk

Received June 30, 2017; accepted June 25, 2018.

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

