Thursday, 21 July 2016

Psychometric Functions of Dual-Task Paradigms for Measuring Listening Effort.

Objectives: The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal-to-noise ratio (SNR).
Design: Younger adults with normal hearing (YNH, n = 24; experiment 1) and older adults with hearing impairment (n = 24; experiment 2) were recruited. Dual-task paradigms, wherein the participants performed a primary speech recognition task simultaneously with a secondary task, were conducted at a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). Reaction time (RT) quantified performance on the secondary task.
Results: For both participant groups and for both easy and hard secondary tasks, the curves describing RT as a function of SNR were peak shaped. RT increased as the SNR changed from favorable to intermediate values, and then decreased as the SNR moved from intermediate to unfavorable values. RT reached its peak (longest time) at the SNRs at which the participants could understand 30 to 50% of the speech. In experiments 1 and 2, the dual-task trials that had the same SNR were conducted in one block. To determine whether the peak shape of the RT curves was specific to the blocked SNR presentation order used in these experiments, additional YNH participants were recruited (n = 25; experiment 3) and dual-task measures in which the SNR varied from trial to trial (i.e., nonblocked) were conducted. The results indicated that, as in the first two experiments, the RT curves had a peak shape.
Conclusions: Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both the YNH participants and the older adults with hearing impairment, and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or nonblocked). The shorter RT at the unfavorable SNRs (speech intelligibility
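
The peak-shaped relation between secondary-task RT and SNR can be summarized by fitting a simple peaked function to the per-SNR mean RTs. Below is a minimal sketch in Python assuming hypothetical RT values; the Gaussian-plus-baseline model, the SNR grid, and the numbers are illustrative, not the authors' actual analysis.

```python
# Sketch: fit a peak-shaped curve to mean reaction time (RT) as a function of SNR.
# Hypothetical data; the Gaussian-plus-baseline model is an illustrative choice only.
import numpy as np
from scipy.optimize import curve_fit

snr = np.array([-15, -10, -5, 0, 5, 10, 15], dtype=float)             # dB SNR
mean_rt = np.array([520, 600, 690, 650, 580, 545, 530], dtype=float)  # ms (made up)

def peaked_rt(snr, baseline, amplitude, peak_snr, width):
    """Baseline RT plus a Gaussian-shaped bump centered at the SNR of maximal effort."""
    return baseline + amplitude * np.exp(-((snr - peak_snr) ** 2) / (2 * width ** 2))

# Initial guesses: baseline near the fastest RT, peak near the SNR with the slowest RT.
p0 = [float(mean_rt.min()), float(np.ptp(mean_rt)), float(snr[np.argmax(mean_rt)]), 5.0]
params, _ = curve_fit(peaked_rt, snr, mean_rt, p0=p0)
baseline, amplitude, peak_snr, width = params
print(f"Estimated peak of the RT curve at {peak_snr:.1f} dB SNR")
```

The fitted peak location can then be compared with the SNR at which speech intelligibility falls in the 30 to 50% range described above.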

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVrAE
via IFTTT

Phonological Priming in Children with Hearing Loss: Effect of Speech Mode, Fidelity, and Lexical Status.

Objectives: This research determined (1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] versus auditory), fidelity (intact versus nonintact auditory onsets), and lexical status (words versus nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) versus children with normal hearing (CNH), and (2) how the degree of HI, auditory word recognition, and age influenced results in CHI. Note that the AV stimuli were not the traditional bimodal input; instead, they consisted of an intact consonant/rhyme in the visual track coupled to a nonintact onset/rhyme in the auditory track. Example stimuli for the word bag are (1) AV: intact visual (b/ag) coupled to nonintact auditory (-b/ag) and (2) auditory: static face coupled to the same nonintact auditory (-b/ag). The question was whether the intact visual speech would "restore or fill in" the nonintact auditory speech, in which case performance for the same auditory stimulus would differ depending on the presence/absence of visual speech.
Design: Participants were 62 CHI and 62 CNH; the CNH ages had a group mean and distribution akin to those of the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: (1) spoke English as a native language, (2) communicated successfully aurally/orally, and (3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture-word task.
Results: Both CHI and CNH showed greater phonological priming from high- than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ between the CHI and CNH; thus, these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge. Two exceptions occurred, however. First, with regard to lexical status, both the CHI and CNH showed significantly greater phonological priming from the nonwords than the words, a pattern consistent with the prediction that children are more aware of phonetics-phonology content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that the CHI showed more nearly equal priming by the high- versus low-fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition, but not degree of HI or age, uniquely influenced phonological priming by the AV nonwords.
Conclusions: With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, this research documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically, these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
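
In a picture-word task, phonological priming is typically quantified as the naming-latency difference between unrelated and onset-related distractors. The sketch below shows one way to tabulate such an effect per group and stimulus condition on a hypothetical long-format table; the column names and values are assumptions for illustration, not the study's dataset or scoring code.

```python
# Sketch: summarize phonological priming (unrelated minus related naming latency)
# by group and stimulus mode. Hypothetical column names and toy data.
import pandas as pd

trials = pd.DataFrame({
    "group":       ["CHI", "CHI", "CNH", "CNH", "CHI", "CHI", "CNH", "CNH"],
    "mode":        ["AV", "AV", "AV", "AV", "auditory", "auditory", "auditory", "auditory"],
    "relatedness": ["related", "unrelated"] * 4,
    "naming_rt":   [820, 905, 760, 850, 840, 900, 785, 860],   # ms, made up
})

# Mean RT per cell, then priming = unrelated - related within each group x mode cell.
cell_means = trials.pivot_table(index=["group", "mode"],
                                columns="relatedness",
                                values="naming_rt",
                                aggfunc="mean")
cell_means["priming_ms"] = cell_means["unrelated"] - cell_means["related"]
print(cell_means)
```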

from #Audiology via ola Kala on Inoreader http://ift.tt/2acmM3F
via IFTTT

Hearing and Vestibular Function After Preoperative Intratympanic Gentamicin Therapy for Vestibular Schwannoma as Part of Vestibular Prehab.

Objective: To evaluate auditory and vestibular function after presurgical treatment with gentamicin in vestibular schwannoma patients.
Background: The vestibular PREHAB protocol aims to diminish the remaining vestibular function before vestibular schwannoma surgery, so that acute symptoms from surgery are reduced and vestibular rehabilitation can begin more efficiently even before surgery. However, the potential cochleotoxicity of gentamicin is a concern, since modern schwannoma surgery strives to preserve hearing.
Study design: Retrospective study.
Setting: University hospital.
Patients: Seventeen patients diagnosed with vestibular schwannoma between 2004 and 2011 who took part in the vestibular PREHAB program. The patients were 21 to 66 years of age (mean 48.8); 9 were female and 8 male.
Intervention: Intratympanic gentamicin instillations before surgery as part of the vestibular PREHAB.
Main outcome measures: Hearing thresholds, word recognition score, caloric response, subjective visual vertical and horizontal, cVEMP, and vestibular impulse tests.
Results: Combined analysis of frequency and hearing threshold showed a significant decrease after gentamicin therapy (p
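
A pre/post comparison of hearing thresholds after intratympanic gentamicin can be run as a paired nonparametric test. The sketch below is illustrative only, with made-up pure-tone-average values and a Wilcoxon signed-rank test as an assumed choice; it is not the statistical model reported in the study.

```python
# Sketch: paired comparison of pure-tone averages (PTA, dB HL) before and after
# intratympanic gentamicin. Threshold values are made up for illustration.
import numpy as np
from scipy.stats import wilcoxon

pta_before = np.array([25, 30, 20, 35, 40, 28, 22, 33])  # dB HL, hypothetical
pta_after  = np.array([30, 34, 22, 41, 44, 30, 27, 38])  # dB HL, hypothetical

stat, p_value = wilcoxon(pta_before, pta_after)  # paired, nonparametric
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.3f}")
print(f"Median threshold shift: {np.median(pta_after - pta_before):.1f} dB")
```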

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVDQA
via IFTTT

Tinnitus and Sleep Difficulties After Cochlear Implantation.

Objectives: To estimate and compare the prevalence of and associations between tinnitus and sleep difficulties in a sample of UK adult cochlear implant users and those identified as potential candidates for cochlear implantation.
Design: The study was conducted using the UK Biobank resource, a population-based cohort of 40- to 69-year-olds. Self-report data on hearing, tinnitus, sleep difficulties, and demographic variables were collected from cochlear implant users (n = 194) and individuals identified as potential candidates for cochlear implantation (n = 211). These "candidates" were selected based on (i) impaired hearing sensitivity, inferred from self-reported hearing aid use, and (ii) impaired hearing function, inferred from an inability to report words accurately at negative signal-to-noise ratios on an unaided closed-set test of speech perception. Data on tinnitus (presence, persistence, and related distress) and on sleep difficulties were analyzed using logistic regression models controlling for gender, age, deprivation, and neuroticism.
Results: The prevalence of tinnitus was similar among implant users (50%) and candidates (52%; p = 0.39). However, implant users were less likely to report that their tinnitus was distressing at its worst (41%) compared with candidates (63%; p = 0.02). The logistic regression model suggested that this difference between the two groups could be explained by the fact that tinnitus was less persistent in implant users (46%) compared with candidates (72%; p
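
The group comparison adjusted for gender, age, deprivation, and neuroticism corresponds to a standard logistic regression. Here is a minimal sketch using the statsmodels formula API on a simulated data frame; the variable names and values are assumptions, not the UK Biobank field names or the study's data.

```python
# Sketch: logistic regression of tinnitus presence on group (implant user vs candidate),
# controlling for gender, age, deprivation, and neuroticism. Hypothetical data frame.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 405
df = pd.DataFrame({
    "tinnitus":    rng.integers(0, 2, n),                 # 1 = tinnitus present
    "group":       rng.choice(["user", "candidate"], n),
    "gender":      rng.choice(["female", "male"], n),
    "age":         rng.integers(40, 70, n),
    "deprivation": rng.normal(0, 1, n),                   # e.g., a deprivation index
    "neuroticism": rng.integers(0, 13, n),
})

model = smf.logit("tinnitus ~ C(group) + C(gender) + age + deprivation + neuroticism",
                  data=df).fit(disp=False)
print(model.summary())
print("Odds ratios:\n", np.exp(model.params))
```

The same formula structure can be reused with persistence or distress as the outcome, which is how the covariate-adjusted group differences described above would be expressed.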

from #Audiology via ola Kala on Inoreader http://ift.tt/2acmqKb
via IFTTT

Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

Objectives: The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM.
Design: The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed-model (between-subjects, repeated-measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined.
Results: Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively), with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for the YNH, followed by the ONH, and worst for the OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no-processing condition and for the ONH listeners in the alphabet-processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no-processing condition, whereas the ONH listeners did not find it so demanding until the additional alphabet processing task was added.
Conclusions: These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds and recall scores for each of the older groups in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
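
Scoring a test structured like the WARRM reduces to tallying, per trial, how many presented words were correctly repeated (recognition) and how many were later recalled, then aggregating by set size. The sketch below shows that bookkeeping on hypothetical trial records; the data layout and numbers are assumptions, not the published scoring procedure.

```python
# Sketch: word-recognition and recall percentages by set size from hypothetical
# WARRM-style trial records (set sizes 2-6, 5 trials per set size).
from collections import defaultdict

# Each record: (set_size, n_words_recognized, n_words_recalled); values are made up.
trials = [
    (2, 2, 2), (2, 2, 1), (2, 1, 1), (2, 2, 2), (2, 2, 2),
    (3, 3, 2), (3, 3, 3), (3, 2, 2), (3, 3, 2), (3, 3, 3),
    (6, 6, 4), (6, 5, 3), (6, 6, 5), (6, 6, 4), (6, 5, 4),
]

presented = defaultdict(int)
recognized = defaultdict(int)
recalled = defaultdict(int)
for set_size, n_recog, n_recall in trials:
    presented[set_size] += set_size
    recognized[set_size] += n_recog
    recalled[set_size] += n_recall

for set_size in sorted(presented):
    rec_pct = 100 * recognized[set_size] / presented[set_size]
    recall_pct = 100 * recalled[set_size] / presented[set_size]
    print(f"set size {set_size}: recognition {rec_pct:.0f}%, recall {recall_pct:.0f}%")
```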

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeW6SV
via IFTTT

Multisite Randomized Controlled Trial to Compare Two Methods of Tinnitus Intervention to Two Control Conditions.

Objectives: In this four-site clinical trial, the authors evaluated whether tinnitus masking (TM) and tinnitus retraining therapy (TRT) decreased tinnitus severity more than two control conditions: an attention-control group that received tinnitus educational counseling (and hearing aids if needed; TED), and a 6-month wait-list control (WLC) group. The authors hypothesized that, over the first 6 months of treatment, TM and TRT would decrease tinnitus severity in Veterans relative to TED and WLC, and that TED would decrease tinnitus severity relative to WLC. The authors also hypothesized that, over 18 months of treatment, TM and TRT would decrease tinnitus severity relative to TED. Treatment effectiveness was hypothesized not to differ across the four sites.
Design: Across four Veterans Affairs medical center sites, N = 148 qualifying Veterans who experienced sufficiently bothersome tinnitus were randomized into one of the four groups. The 115 Veterans assigned to TM (n = 42), TRT (n = 34), and TED (n = 39) were considered immediate-treatment subjects; they received comparable time and attention from audiologists. The 33 Veterans assigned to WLC were, after 6 months, randomized to receive delayed treatment in TM, TRT, or TED. Outcomes were assessed with the Tinnitus Handicap Inventory (THI) at 0, 3, 6, 12, and 18 months.
Results: Results of a repeated-measures analysis of variance using an intention-to-treat approach showed that the tinnitus severity of Veterans receiving TM, TRT, and TED significantly decreased (p
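
The outcome analysis described, THI scores measured repeatedly at 0, 3, 6, 12, and 18 months, is the kind of design handled by a repeated-measures ANOVA. A minimal sketch with statsmodels' AnovaRM on simulated long-format data follows; it covers only the within-subject time factor for a single arm and is an assumption-laden stand-in, not the trial's intention-to-treat model (which also includes treatment group and site).

```python
# Sketch: repeated-measures ANOVA of THI scores over assessment time points.
# Hypothetical balanced data for one treatment arm; the real analysis also models
# treatment group, site, and missing data under intention-to-treat.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = [f"s{i:03d}" for i in range(40)]
months = [0, 3, 6, 12, 18]

rows = []
for subj in subjects:
    baseline = rng.normal(55, 12)            # THI at intake (0-100 scale)
    for m in months:
        improvement = 0.8 * m                # assume gradual decline in severity
        rows.append({"subject": subj, "month": m,
                     "thi": baseline - improvement + rng.normal(0, 6)})
long_df = pd.DataFrame(rows)

result = AnovaRM(long_df, depvar="thi", subject="subject", within=["month"]).fit()
print(result)
```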

from #Audiology via ola Kala on Inoreader http://ift.tt/2acm9at
via IFTTT

Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

Objectives: This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization.
Design: Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners.
Results: Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks.
Conclusions: When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
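
Quantifying categorization responses with logistic regression, as described, amounts to fitting a psychometric function whose slope indexes sensitivity to the cue. Below is a minimal sketch for a voice-onset-time (VOT) continuum; the continuum steps, trial counts, and simulated responses are invented for illustration and are not the study's stimuli or analysis code.

```python
# Sketch: logistic regression of /b/-vs-/p/ categorization against voice onset time (VOT).
# The slope of the fitted function serves as an index of sensitivity to the temporal cue.
import numpy as np
import statsmodels.api as sm

vot_ms = np.repeat(np.array([0, 10, 20, 30, 40, 50, 60], dtype=float), 20)  # 20 trials/step
# Simulate "p" responses from an assumed underlying psychometric function.
rng = np.random.default_rng(2)
true_p = 1 / (1 + np.exp(-(vot_ms - 30) / 5))
responded_p = rng.binomial(1, true_p)

X = sm.add_constant(vot_ms)                  # intercept + VOT predictor
fit = sm.Logit(responded_p, X).fit(disp=False)
intercept, slope = fit.params
print(f"Slope (per ms of VOT): {slope:.3f}")
print(f"Category boundary (50% point): {-intercept / slope:.1f} ms VOT")
```

A steeper slope indicates sharper categorization; the same fit applied to a formant-transition continuum would index sensitivity to the spectral cue.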

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVr3C
via IFTTT

Pediatric Hearing Aid Management: Parent-Reported Needs for Learning Support.

Objectives: The aim of this study was to investigate parent learning and support needs related to hearing aid management for young children, and factors that influence parent-reported hours of hearing aid use.
Design: A cross-sectional survey design was used to collect survey data in seven states. The child's primary caregiver completed a demographic form, a questionnaire to explore parent learning and support needs as well as their challenges with hearing aid use, and the Patient Health Questionnaire to identify symptoms of depression. Three hundred and eighteen parents completed the questionnaires.
Results: Responses were analyzed for 318 parents of children (M = 23.15 months; SD = 10.43; range: 3 to 51) who had been wearing hearing aids (M = 15.52; SD = 10.11; range:

from #Audiology via ola Kala on Inoreader http://ift.tt/2acmF8m
via IFTTT

Effects of Self-Generated Noise on Estimates of Detection Threshold in Quiet for School-Age Children and Adults.

Objectives: Detection thresholds in quiet become adult-like earlier in childhood for high frequencies than for low frequencies. When adults listen for sounds near threshold, they tend to engage in behaviors that reduce physiologic noise (e.g., quiet breathing), which is predominantly low frequency. Children may not suppress self-generated noise to the same extent as adults, such that low-frequency self-generated noise elevates thresholds in the associated frequency regions. This possibility was evaluated by measuring noise levels in the ear canal simultaneously with adaptive threshold estimation.
Design: Listeners were normal-hearing children (4.3 to 16.0 years) and adults. Detection thresholds were measured adaptively for 250-, 1000-, and 4000-Hz pure tones using a three-alternative forced-choice procedure. Recordings of noise in the ear canal were made while the listeners performed this task, with the earphone and microphone routed through a single foam insert. Levels of self-generated noise were computed in octave-wide bands. Age effects were evaluated for four groups: 4- to 6-year-olds, 7- to 10-year-olds, 11- to 16-year-olds, and adults.
Results: Consistent with previous data, the effect of child age on thresholds was robust at 250 Hz and fell off at higher frequencies; thresholds of even the youngest listeners were similar to adults' at 4000 Hz. Self-generated noise had a similar low-pass spectral shape for all age groups, although the magnitude of self-generated noise was higher in younger listeners. If self-generated noise impairs detection, then noise levels should be higher for trials associated with the wrong answer than for trials associated with the right answer. This association was observed for all listener groups at the 250-Hz signal frequency. For adults and older children, this association was limited to the noise band centered on the 250-Hz signal. For the two younger groups of children, this association was strongest at the signal frequency but extended to bands spectrally remote from the 250-Hz signal. For the 1000-Hz signal frequency, there was a broadly tuned association between noise and response only for the two younger groups of children. For the 4000-Hz signal frequency, only the youngest group of children demonstrated an association between responses and noise levels, and this association was particularly pronounced for bands below the signal frequency.
Conclusions: These results provide evidence that self-generated noise plays a role in the prolonged development of low-frequency detection thresholds in quiet. Some aspects of the results are consistent with the possibility that self-generated noise elevates thresholds via energetic masking, particularly at 250 Hz. The association between behavioral responses and noise spectrally remote from the signal frequency is also consistent with the idea that self-generated noise may also reflect contributions of more central factors (e.g., inattention to the task). Evaluation of self-generated noise could improve the diagnosis of minimal or mild hearing loss. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
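
Computing self-generated noise levels in octave-wide bands from an ear-canal recording can be done with standard band-pass filtering. The sketch below uses SciPy Butterworth filters on a synthetic signal; the sampling rate, band centers, and uncalibrated dB-re-full-scale levels are assumptions for illustration, not the study's measurement chain.

```python
# Sketch: octave-band levels (dB re full scale) from an ear-canal recording.
# The recording here is synthetic noise; a real analysis would use calibrated
# microphone data so that levels could be expressed in dB SPL.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 32000                      # Hz, assumed sampling rate
recording = np.random.default_rng(3).normal(0, 0.01, fs * 2)  # 2 s of fake noise

def octave_band_level(signal, fs, center_hz):
    """RMS level in an octave band centered at center_hz, in dB re full scale."""
    low, high = center_hz / np.sqrt(2), center_hz * np.sqrt(2)
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)
    rms = np.sqrt(np.mean(band ** 2))
    return 20 * np.log10(rms)

for center in [250, 500, 1000, 2000, 4000]:     # Hz, octave-band centers
    print(f"{center:>5} Hz band: {octave_band_level(recording, fs, center):6.1f} dBFS")
```

Band levels computed this way per trial could then be compared between correct and incorrect trials, which is the contrast the Results section describes.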

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVV9U
via IFTTT

Psychometric Functions of Dual-Task Paradigms for Measuring Listening Effort.

Objectives: The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal to noise ratio (SNR). Design: Younger adults with normal hearing (YNH, n = 24; experiment 1) and older adults with hearing impairment (n = 24; experiment 2) were recruited. Dual-task paradigms wherein the participants performed a primary speech recognition task simultaneously with a secondary task were conducted at a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). The reaction time (RT) quantified the performance of the secondary task. Results: For both participant groups and for both easy and hard secondary tasks, the curves that described the RT as a function of SNR were peak shaped. The RT increased as SNR changed from favorable to intermediate SNRs, and then decreased as SNRs moved from intermediate to unfavorable SNRs. The RT reached its peak (longest time) at the SNRs at which the participants could understand 30 to 50% of the speech. In experiments 1 and 2, the dual-task trials that had the same SNR were conducted in one block. To determine if the peak shape of the RT curves was specific to the blocked SNR presentation order used in these experiments, YNH participants were recruited (n = 25; experiment 3) and dual-task measures, wherein the SNR was varied from trial to trial (i.e., nonblocked), were conducted. The results indicated that, similar to the first two experiments, the RT curves had a peak shape. Conclusions: Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both YNH and older adults with hearing impairment participants and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or nonblocked). The shorter RT at the unfavorable SNRs (speech intelligibility

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVrAE
via IFTTT

Phonological Priming in Children with Hearing Loss: Effect of Speech Mode, Fidelity, and Lexical Status.

Objectives: This research determined (1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] versus auditory), fidelity (intact versus nonintact auditory onsets), and lexical status (words versus nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) versus children with normal hearing (CNH) and (2) how the degree of HI, auditory word recognition, and age influenced results in CHI. Note that the AV stimuli were not the traditional bimodal input but instead they consisted of an intact consonant/rhyme in the visual track coupled to a nonintact onset/rhyme in the auditory track. Example stimuli for the word bag are (1) AV: intact visual (b/ag) coupled to nonintact auditory (-b/ag) and 2) auditory: static face coupled to the same nonintact auditory (-b/ag). The question was whether the intact visual speech would "restore or fill-in" the nonintact auditory speech in which case performance for the same auditory stimulus would differ depending on the presence/absence of visual speech. Design: Participants were 62 CHI and 62 CNH whose ages had a group mean and group distribution akin to that in the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: (1) spoke English as a native language, (2) communicated successfully aurally/orally, and (3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture word task. Results: Both CHI and CNH showed greater phonological priming from high than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ in the CHI versus CNH-thus these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge. Two exceptions occurred, however. First-with regard to lexical status-both the CHI and CNH showed significantly greater phonological priming from the nonwords than words, a pattern consistent with the prediction that children are more aware of phonetics-phonology content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that CHI showed more nearly equal priming by the high- versus low-fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition-but not degree of HI or age-uniquely influenced phonological priming by the AV nonwords. Conclusions: With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, this research documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2acmM3F
via IFTTT

Hearing and Vestibular Function After Preoperative Intratympanic Gentamicin Therapy for Vestibular Schwanomma as Part of Vestibular Prehab.

Objective: To evaluate auditory and vestibular function after presurgical treatment with gentamicin in schwannoma patients. Background: The vestibular PREHAB protocol aims at diminishing the remaining vestibular function before vestibular schwannoma surgery, to ensure less acute symptoms from surgery, and initiate a more efficient vestibular rehabilitation already before surgery. However, the potential cochleotoxicity of gentamicin is a concern, since modern schwannoma surgery strives to preserve hearing. Study design: Retrospective study. Setting: University hospital. Patients: Seventeen patients diagnosed with vestibular schwannoma between 2004 and 2011, and took part in vestibular PREHAB program. The patients were of age 21 to 66 years (mean 48.8), 9 females and 8 males. Intervention: Intratympanic gentamicin installations before surgery as part of the vestibular PREHAB. Main outcome measures: Hearing thresholds, word recognition score, caloric response, subjective visual vertical and horizontal, cVEMP, and vestibular impulse tests. Results: Combined analysis of frequency and hearing threshold showed a significant decrease after gentamicin therapy (p

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVDQA
via IFTTT

Tinnitus and Sleep Difficulties After Cochlear Implantation.

Objectives: To estimate and compare the prevalence of and associations between tinnitus and sleep difficulties in a sample of UK adult cochlear implant users and those identified as potential candidates for cochlear implantation. Design: The study was conducted using the UK Biobank resource, a population-based cohort of 40- to 69-year olds. Self-report data on hearing, tinnitus, sleep difficulties, and demographic variables were collected from cochlear implant users (n = 194) and individuals identified as potential candidates for cochlear implantation (n = 211). These "candidates" were selected based on (i) impaired hearing sensitivity, inferred from self-reported hearing aid use and (ii) impaired hearing function, inferred from an inability to report words accurately at negative signal to noise ratios on an unaided closed-set test of speech perception. Data on tinnitus (presence, persistence, and related distress) and on sleep difficulties were analyzed using logistic regression models controlling for gender, age, deprivation, and neuroticism. Results: The prevalence of tinnitus was similar among implant users (50%) and candidates (52%; p = 0.39). However, implant users were less likely to report that their tinnitus was distressing at its worst (41%) compared with candidates (63%; p = 0.02). The logistic regression model suggested that this difference between the two groups could be explained by the fact that tinnitus was less persistent in implant users (46%) compared with candidates (72%; p

from #Audiology via ola Kala on Inoreader http://ift.tt/2acmqKb
via IFTTT

Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

Objectives: The purpose of this study was to develop the word auditory recognition and recall measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. Design: The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Results: Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. Conclusions: These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeW6SV
via IFTTT

Multisite Randomized Controlled Trial to Compare Two Methods of Tinnitus Intervention to Two Control Conditions.

Objectives: In this four-site clinical trial, we evaluated whether tinnitus masking (TM) and tinnitus retraining therapy (TRT) decreased tinnitus severity more than the two control groups: an attention-control group that received tinnitus educational counseling (and hearing aids if needed; TED), and a 6-month-wait-list control (WLC) group. The authors hypothesized that, over the first 6 months of treatment, TM and TRT would decrease tinnitus severity in Veterans relative to TED and WLC, and that TED would decrease tinnitus severity relative to WLC. The authors also hypothesized that, over 18 months of treatment, TM and TRT would decrease tinnitus severity relative to TED. Treatment effectiveness was hypothesized not to be different across the four sites. Design: Across four Veterans affairs medical center sites, N = 148 qualifying Veterans who experienced sufficiently bothersome tinnitus were randomized into one of the four groups. The 115 Veterans assigned to TM (n = 42), TRT (n = 34), and TED (n = 39) were considered immediate-treatment subjects; they received comparable time and attention from audiologists. The 33 Veterans assigned to WLC were, after 6 months, randomized to receive delayed treatment in TM, TRT, or TED. Assessment of outcomes took place using the tinnitus handicap inventory (THI) at 0, 3, 6, 12, and 18 months. Results: Results of a repeated measures analysis of variance using an intention-to-treat approach showed that the tinnitus severity of Veterans receiving TM, TRT, and TED significantly decreased (p

from #Audiology via ola Kala on Inoreader http://ift.tt/2acm9at
via IFTTT

Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

Objectives: This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design: Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Results: Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions: When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVr3C
via IFTTT

Pediatric Hearing Aid Management: Parent-Reported Needs for Learning Support.

Objectives: The aim of this study was to investigate parent learning and support needs related to hearing aid management for young children, and factors that influence parent-reported hours of hearing aid use. Design: A cross-sectional survey design was used to collect survey data in seven states. The child's primary caregiver completed a demographic form, a questionnaire to explore parent learning and support needs as well as their challenges with hearing aid use, and the patient health questionnaire to identify symptoms of depression. Three hundred and eighteen parents completed the questionnaires. Results: Responses were analyzed for 318 parents of children (M = 23.15 months; SD = 10.43; range: 3 to 51) who had been wearing hearing aids (M = 15.52; SD = 10.11; range:

from #Audiology via ola Kala on Inoreader http://ift.tt/2acmF8m
via IFTTT

Effects of Self-Generated Noise on Estimates of Detection Threshold in Quiet for School-Age Children and Adults.

Objectives: Detection thresholds in quiet become adult-like earlier in childhood for high than low frequencies. When adults listen for sounds near threshold, they tend to engage in behaviors that reduce physiologic noise (e.g., quiet breathing), which is predominantly low frequency. Children may not suppress self-generated noise to the same extent as adults, such that low-frequency self-generated noise elevates thresholds in the associated frequency regions. This possibility was evaluated by measuring noise levels in the ear canal simultaneous with adaptive threshold estimation. Design: Listeners were normal-hearing children (4.3 to 16.0 years) and adults. Detection thresholds were measured adaptively for 250-, 1000-, and 4000-Hz pure tones using a three-alternative forced-choice procedure. Recordings of noise in the ear canal were made while the listeners performed this task, with the earphone and microphone routed through a single foam insert. Levels of self-generated noise were computed in octave-wide bands. Age effects were evaluated for four groups: 4- to 6-year olds, 7- to 10-year olds, 11- to 16-year olds, and adults. Results: Consistent with previous data, the effect of child age on thresholds was robust at 250 Hz and fell off at higher frequencies; thresholds of even the youngest listeners were similar to adults' at 4000 Hz. Self-generated noise had a similar low-pass spectral shape for all age groups, although the magnitude of self-generated noise was higher in younger listeners. If self-generated noise impairs detection, then noise levels should be higher for trials associated with the wrong answer than the right answer. This association was observed for all listener groups at the 250-Hz signal frequency. For adults and older children, this association was limited to the noise band centered on the 250-Hz signal. For the two younger groups of children, this association was strongest at the signal frequency, but extended to bands spectrally remote from the 250-Hz signal. For the 1000-Hz signal frequency, there was a broadly tuned association between noise and response only for the two younger groups of children. For the 4000-Hz signal frequency, only the youngest group of children demonstrated an association between responses and noise levels, and this association was particularly pronounced for bands below the signal frequency. Conclusions: These results provide evidence that self-generated noise plays a role in the prolonged development of low-frequency detection thresholds in quiet. Some aspects of the results are consistent with the possibility that self-generated noise elevates thresholds via energetic masking, particularly at 250 Hz. The association between behavioral responses and noise spectrally remote from the signal frequency is also consistent with the idea that self-generated noise may also reflect contributions of more central factors (e.g., inattention to the task). Evaluation of self-generated noise could improve diagnosis of minimal or mild hearing loss. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2aeVV9U
via IFTTT

Psychometric Functions of Dual-Task Paradigms for Measuring Listening Effort.

Objectives: The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal to noise ratio (SNR). Design: Younger adults with normal hearing (YNH, n = 24; experiment 1) and older adults with hearing impairment (n = 24; experiment 2) were recruited. Dual-task paradigms wherein the participants performed a primary speech recognition task simultaneously with a secondary task were conducted at a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). The reaction time (RT) quantified the performance of the secondary task. Results: For both participant groups and for both easy and hard secondary tasks, the curves that described the RT as a function of SNR were peak shaped. The RT increased as SNR changed from favorable to intermediate SNRs, and then decreased as SNRs moved from intermediate to unfavorable SNRs. The RT reached its peak (longest time) at the SNRs at which the participants could understand 30 to 50% of the speech. In experiments 1 and 2, the dual-task trials that had the same SNR were conducted in one block. To determine if the peak shape of the RT curves was specific to the blocked SNR presentation order used in these experiments, YNH participants were recruited (n = 25; experiment 3) and dual-task measures, wherein the SNR was varied from trial to trial (i.e., nonblocked), were conducted. The results indicated that, similar to the first two experiments, the RT curves had a peak shape. Conclusions: Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both YNH and older adults with hearing impairment participants and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or nonblocked). The shorter RT at the unfavorable SNRs (speech intelligibility

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2aeVrAE
via IFTTT

Phonological Priming in Children with Hearing Loss: Effect of Speech Mode, Fidelity, and Lexical Status.

Objectives: This research determined (1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] versus auditory), fidelity (intact versus nonintact auditory onsets), and lexical status (words versus nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) versus children with normal hearing (CNH) and (2) how the degree of HI, auditory word recognition, and age influenced results in CHI. Note that the AV stimuli were not the traditional bimodal input but instead they consisted of an intact consonant/rhyme in the visual track coupled to a nonintact onset/rhyme in the auditory track. Example stimuli for the word bag are (1) AV: intact visual (b/ag) coupled to nonintact auditory (-b/ag) and 2) auditory: static face coupled to the same nonintact auditory (-b/ag). The question was whether the intact visual speech would "restore or fill-in" the nonintact auditory speech in which case performance for the same auditory stimulus would differ depending on the presence/absence of visual speech. Design: Participants were 62 CHI and 62 CNH whose ages had a group mean and group distribution akin to that in the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: (1) spoke English as a native language, (2) communicated successfully aurally/orally, and (3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture word task. Results: Both CHI and CNH showed greater phonological priming from high than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ in the CHI versus CNH-thus these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge. Two exceptions occurred, however. First-with regard to lexical status-both the CHI and CNH showed significantly greater phonological priming from the nonwords than words, a pattern consistent with the prediction that children are more aware of phonetics-phonology content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that CHI showed more nearly equal priming by the high- versus low-fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition-but not degree of HI or age-uniquely influenced phonological priming by the AV nonwords. Conclusions: With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, this research documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2acmM3F
via IFTTT

Hearing and Vestibular Function After Preoperative Intratympanic Gentamicin Therapy for Vestibular Schwanomma as Part of Vestibular Prehab.

Objective: To evaluate auditory and vestibular function after presurgical treatment with gentamicin in schwannoma patients. Background: The vestibular PREHAB protocol aims at diminishing the remaining vestibular function before vestibular schwannoma surgery, to ensure less acute symptoms from surgery, and initiate a more efficient vestibular rehabilitation already before surgery. However, the potential cochleotoxicity of gentamicin is a concern, since modern schwannoma surgery strives to preserve hearing. Study design: Retrospective study. Setting: University hospital. Patients: Seventeen patients diagnosed with vestibular schwannoma between 2004 and 2011, and took part in vestibular PREHAB program. The patients were of age 21 to 66 years (mean 48.8), 9 females and 8 males. Intervention: Intratympanic gentamicin installations before surgery as part of the vestibular PREHAB. Main outcome measures: Hearing thresholds, word recognition score, caloric response, subjective visual vertical and horizontal, cVEMP, and vestibular impulse tests. Results: Combined analysis of frequency and hearing threshold showed a significant decrease after gentamicin therapy (p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2aeVDQA
via IFTTT

Tinnitus and Sleep Difficulties After Cochlear Implantation.

Objectives: To estimate and compare the prevalence of and associations between tinnitus and sleep difficulties in a sample of UK adult cochlear implant users and those identified as potential candidates for cochlear implantation. Design: The study was conducted using the UK Biobank resource, a population-based cohort of 40- to 69-year olds. Self-report data on hearing, tinnitus, sleep difficulties, and demographic variables were collected from cochlear implant users (n = 194) and individuals identified as potential candidates for cochlear implantation (n = 211). These "candidates" were selected based on (i) impaired hearing sensitivity, inferred from self-reported hearing aid use and (ii) impaired hearing function, inferred from an inability to report words accurately at negative signal to noise ratios on an unaided closed-set test of speech perception. Data on tinnitus (presence, persistence, and related distress) and on sleep difficulties were analyzed using logistic regression models controlling for gender, age, deprivation, and neuroticism. Results: The prevalence of tinnitus was similar among implant users (50%) and candidates (52%; p = 0.39). However, implant users were less likely to report that their tinnitus was distressing at its worst (41%) compared with candidates (63%; p = 0.02). The logistic regression model suggested that this difference between the two groups could be explained by the fact that tinnitus was less persistent in implant users (46%) compared with candidates (72%; p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2acmqKb
via IFTTT

Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

Objectives: The purpose of this study was to develop the word auditory recognition and recall measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. Design: The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Results: Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. Conclusions: These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2aeW6SV
via IFTTT

Multisite Randomized Controlled Trial to Compare Two Methods of Tinnitus Intervention to Two Control Conditions.

Objectives: In this four-site clinical trial, we evaluated whether tinnitus masking (TM) and tinnitus retraining therapy (TRT) decreased tinnitus severity more than the two control groups: an attention-control group that received tinnitus educational counseling (and hearing aids if needed; TED), and a 6-month-wait-list control (WLC) group. The authors hypothesized that, over the first 6 months of treatment, TM and TRT would decrease tinnitus severity in Veterans relative to TED and WLC, and that TED would decrease tinnitus severity relative to WLC. The authors also hypothesized that, over 18 months of treatment, TM and TRT would decrease tinnitus severity relative to TED. Treatment effectiveness was hypothesized not to be different across the four sites. Design: Across four Veterans affairs medical center sites, N = 148 qualifying Veterans who experienced sufficiently bothersome tinnitus were randomized into one of the four groups. The 115 Veterans assigned to TM (n = 42), TRT (n = 34), and TED (n = 39) were considered immediate-treatment subjects; they received comparable time and attention from audiologists. The 33 Veterans assigned to WLC were, after 6 months, randomized to receive delayed treatment in TM, TRT, or TED. Assessment of outcomes took place using the tinnitus handicap inventory (THI) at 0, 3, 6, 12, and 18 months. Results: Results of a repeated measures analysis of variance using an intention-to-treat approach showed that the tinnitus severity of Veterans receiving TM, TRT, and TED significantly decreased (p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2acm9at
via IFTTT

Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

Objectives: This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design: Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Results: Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions: When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
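
To make the categorization analysis concrete, the sketch below fits a logistic function to hypothetical proportion-of-/da/ responses along a formant-transition continuum and reads off the slope as a cue-sensitivity index. The continuum values and response proportions are invented for illustration, and the fit shown here is a generic psychometric-function fit, not the authors' exact regression model.

```python
# Sketch: logistic fit to syllable-categorization responses along a spectral cue continuum.
# The data are invented; the slope of the fitted function serves as a cue-sensitivity index.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    """Probability of a /da/ response as a function of continuum step."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

steps = np.arange(1, 8)                                         # 7-step /ba/-/da/ continuum
prop_da = np.array([0.05, 0.10, 0.25, 0.55, 0.80, 0.92, 0.97])  # hypothetical proportions

params, _ = curve_fit(logistic, steps, prop_da, p0=[4.0, 1.0])
midpoint, slope = params
print(f"category boundary ~ step {midpoint:.2f}, slope (sensitivity) = {slope:.2f}")
```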

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2aeVr3C
via IFTTT


Effects of Self-Generated Noise on Estimates of Detection Threshold in Quiet for School-Age Children and Adults.

Objectives: Detection thresholds in quiet become adult-like earlier in childhood for high than low frequencies. When adults listen for sounds near threshold, they tend to engage in behaviors that reduce physiologic noise (e.g., quiet breathing), which is predominantly low frequency. Children may not suppress self-generated noise to the same extent as adults, such that low-frequency self-generated noise elevates thresholds in the associated frequency regions. This possibility was evaluated by measuring noise levels in the ear canal simultaneous with adaptive threshold estimation. Design: Listeners were normal-hearing children (4.3 to 16.0 years) and adults. Detection thresholds were measured adaptively for 250-, 1000-, and 4000-Hz pure tones using a three-alternative forced-choice procedure. Recordings of noise in the ear canal were made while the listeners performed this task, with the earphone and microphone routed through a single foam insert. Levels of self-generated noise were computed in octave-wide bands. Age effects were evaluated for four groups: 4- to 6-year olds, 7- to 10-year olds, 11- to 16-year olds, and adults. Results: Consistent with previous data, the effect of child age on thresholds was robust at 250 Hz and fell off at higher frequencies; thresholds of even the youngest listeners were similar to adults' at 4000 Hz. Self-generated noise had a similar low-pass spectral shape for all age groups, although the magnitude of self-generated noise was higher in younger listeners. If self-generated noise impairs detection, then noise levels should be higher for trials associated with the wrong answer than the right answer. This association was observed for all listener groups at the 250-Hz signal frequency. For adults and older children, this association was limited to the noise band centered on the 250-Hz signal. For the two younger groups of children, this association was strongest at the signal frequency, but extended to bands spectrally remote from the 250-Hz signal. For the 1000-Hz signal frequency, there was a broadly tuned association between noise and response only for the two younger groups of children. For the 4000-Hz signal frequency, only the youngest group of children demonstrated an association between responses and noise levels, and this association was particularly pronounced for bands below the signal frequency. Conclusions: These results provide evidence that self-generated noise plays a role in the prolonged development of low-frequency detection thresholds in quiet. Some aspects of the results are consistent with the possibility that self-generated noise elevates thresholds via energetic masking, particularly at 250 Hz. The association between behavioral responses and noise spectrally remote from the signal frequency is also consistent with the idea that self-generated noise may also reflect contributions of more central factors (e.g., inattention to the task). Evaluation of self-generated noise could improve diagnosis of minimal or mild hearing loss. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
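
The octave-band analysis of ear-canal recordings described above can be sketched as follows; the sampling rate, band centers, and reference level are placeholders, and the code simply band-pass filters the recording in octave-wide bands and converts each band's RMS to decibels.

```python
# Sketch: octave-band levels of an ear-canal noise recording (placeholder parameters).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def octave_band_levels(x, fs, centers=(250, 500, 1000, 2000, 4000), ref=1.0):
    """Return dB level of x in octave-wide bands around each center frequency."""
    levels = {}
    for fc in centers:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)        # octave-wide band edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        rms = np.sqrt(np.mean(band ** 2))
        levels[fc] = 20 * np.log10(rms / ref + 1e-12)    # dB re: arbitrary reference
    return levels

fs = 16000
noise = np.random.default_rng(0).normal(size=fs)         # 1 s of stand-in "ear-canal" noise
print(octave_band_levels(noise, fs))
```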

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2aeVV9U
via IFTTT

Pediatric Hearing Aid Management: Parent-Reported Needs for Learning Support.

Ear Hear. 2016 Jul 19;

Authors: Muñoz K, Rusk SE, Nelson L, Preston E, White KR, Barrett TS, Twohig MP

Abstract
OBJECTIVES: The aim of this study was to investigate parent learning and support needs related to hearing aid management for young children, and factors that influence parent-reported hours of hearing aid use.
DESIGN: A cross-sectional survey design was used to collect survey data in seven states. The child's primary caregiver completed a demographic form, a questionnaire to explore parent learning and support needs as well as their challenges with hearing aid use, and the patient health questionnaire to identify symptoms of depression. Three hundred and eighteen parents completed the questionnaires.
RESULTS: Responses were analyzed for 318 parents of children (M = 23.15 months; SD = 10.43; range: 3 to 51) who had been wearing hearing aids (M = 15.52; SD = 10.11; range: <1 to 50 months). Even though the majority of parents reported receiving the educational support queried, approximately one-third wanted more information on a variety of topics such as loaner hearing aids, what their child can/cannot hear, financial assistance, how to meet other parents, how to do basic hearing aid maintenance, and how to keep the hearing aids on their child. The most frequently reported challenges that interfered with hearing aid use (rated often or always) were child activities, child not wanting to wear the hearing aids, and fear of losing or damaging the hearing aids. Forty-two percent of parents reported that, on good days, their child used hearing aids all waking hours. Multiple regression was used to examine the effect on parent-reported typical hours of hearing aid use on good days of the following variables: (1) presence of depressive symptoms for the parent, (2) child age, (3) family income, (4) primary caregiver education level, (5) presence of additional disabilities for the child, (6) degree of hearing loss, and (7) length of time since the child was fitted with hearing aids. Reported hours of hearing aid use were statistically significantly lower when parents reported mild to severe symptoms of depression, lower income, or a lower education level, and when children had mild hearing loss or additional disabilities.
CONCLUSION: Although parents reported overall that their needs for hearing aid education and support had generally been met, there were important suggestions for how audiologists and other service providers could better meet parent needs. Hearing aid use for young children was variable and influenced by a variety of factors. Understanding parent experiences and challenges can help audiologists more effectively focus support. Audiologists are more likely to meet the needs of families if they take care to provide access to thorough and comprehensive education and ongoing support that is tailored to address the unique needs of individual families.

PMID: 27438872 [PubMed - as supplied by publisher]
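
The multiple regression described in the Results above can be sketched as an ordinary least squares model; the formula below uses assumed column names and simulated data standing in for the seven predictors reported by the authors.

```python
# Sketch of the reported multiple regression (assumed column names, simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 318
df = pd.DataFrame({
    "use_hours": rng.uniform(2, 14, n),            # parent-reported use on good days
    "depression": rng.integers(0, 2, n),           # 1 = mild to severe symptoms
    "child_age_months": rng.uniform(3, 51, n),
    "income": rng.integers(1, 6, n),               # ordinal income bracket
    "parent_education": rng.integers(1, 5, n),     # ordinal education level
    "additional_disabilities": rng.integers(0, 2, n),
    "mild_loss": rng.integers(0, 2, n),            # 1 = mild degree of hearing loss
    "months_since_fitting": rng.uniform(1, 50, n),
})

model = smf.ols(
    "use_hours ~ depression + child_age_months + income + parent_education"
    " + additional_disabilities + mild_loss + months_since_fitting",
    data=df,
).fit()
print(model.summary())
```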



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2acEPYQ
via IFTTT

Contents List

Publication date: July 2016
Source: Gait & Posture, Volume 48





from #Audiology via ola Kala on Inoreader http://ift.tt/29W1p5H
via IFTTT

Knee loading patterns of the non-paretic and paretic legs during post-stroke gait.

Publication date: Available online 19 July 2016
Source: Gait & Posture
Author(s): Stephanie Marrocco, Lucas Crosby, Ian C Jones, Rebecca F Moyer, Trevor B Birmingham, Kara K Patterson
Background: Post-stroke gait disorders could cause secondary musculoskeletal complications associated with excessive repetitive loading. The study objectives were to 1) determine the feasibility of measuring common proxies for dynamic medial knee joint loading during gait post-stroke with the external knee adduction moment (KAM) and knee flexion moment (KFM) and 2) characterize knee loading and typical load-reducing compensations post-stroke. Methods: Participants with stroke (n = 9) and healthy individuals (n = 17) underwent 3D gait analysis. The stroke and healthy groups were compared with unpaired t-tests on peak KAM and peak KFM and on typical medial knee joint load-reducing compensations: toe out and trunk lean. The relationship between KAM and load-reducing compensations in the stroke group was investigated with Spearman correlations. Results: Mean (SD) values for KAM and KFM in the healthy group [KAM = 2.20 (0.88) %BW*ht; KFM = 0.64 (0.60) %BW*ht] were not significantly different from the values for the paretic leg [KAM = 2.64 (0.98) %BW*ht; KFM = 1.26 (1.13) %BW*ht] or non-paretic leg of the stroke group [KAM = 2.23 (0.62) %BW*ht; KFM = 1.10 (1.20) %BW*ht]. Post hoc one-sample t-tests revealed greater loading in stroke participants on the paretic leg (n = 3), non-paretic leg (n = 1), and both legs (n = 2) compared to the healthy group. The angle of trunk lean and the angle of toe out were not related to KAM in the stroke group. Discussion: Measurement of limb loading during gait post-stroke is feasible and revealed excessive loading in individuals with mild to moderate stroke compared to healthy adults. Further investigation of potential joint degeneration and pain due to repetitive excessive loading associated with post-stroke gait is warranted.
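
Peak KAM and KFM values of the kind reported above are commonly expressed as a percentage of body weight times height; the sketch below shows that normalization for an illustrative frontal-plane moment waveform. The waveform, body mass, and height are invented numbers, not data from this study.

```python
# Sketch: normalizing a peak knee moment to %BW*ht (illustrative values only).
import numpy as np

def peak_moment_pct_bw_ht(moment_nm, body_mass_kg, height_m, g=9.81):
    """Peak of a joint-moment waveform (N*m) expressed as % of body weight x height."""
    bw_ht = body_mass_kg * g * height_m          # N*m
    return 100.0 * np.max(moment_nm) / bw_ht

# Invented frontal-plane knee moment over one stance phase (N*m).
t = np.linspace(0, 1, 101)
kam_waveform = 30 * np.sin(np.pi * t) + 5 * np.sin(3 * np.pi * t)

print(f"peak KAM = {peak_moment_pct_bw_ht(kam_waveform, 80.0, 1.75):.2f} %BW*ht")
```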



from #Audiology via ola Kala on Inoreader http://ift.tt/29WgdFI
via IFTTT

Kinematic and electromyographic analysis in patients with patellofemoral pain syndrome during single leg triple hop test

Publication date: Available online 19 July 2016
Source: Gait & Posture
Author(s): Marcelo Martins Kalytczak, Paulo Roberto Garcia Lucareli, Amir Curcio dos Reis, André Serra Bley, Daniela Aparecida Biasotto-Gonzalez, João Carlos Ferrari Correa, Fabiano Politti
Possible delays in pre-activation or deficiencies in the activity of the dynamic muscle stabilizers of the knee and hip joints are the most common causes of patellofemoral pain syndrome (PFPS). The aim of the study was to compare kinematic variables and the electromyographic (EMG) activity of the vastus lateralis, biceps femoris, gluteus maximus, and gluteus medius muscles between patients with PFPS and healthy subjects during the single leg triple hop test (SLTHT). This study included 14 females with PFPS (PFPS group) and 14 healthy females with no history of knee pain (healthy group). Kinematic and EMG data were collected while participants performed a single session of the SLTHT. The PFPS group exhibited a significant increase (p < 0.05) in the EMG activity of the biceps femoris and vastus lateralis muscles compared with the healthy group in pre-activity and during the stance phase. The same result was found for the vastus lateralis muscle (p < 0.05) when analyzing the EMG activity during the eccentric portion of the stance phase. In the kinematic analysis, no significant differences were found between the groups. These results indicate that the biceps femoris and vastus lateralis muscles are more active in the PFPS group than in the healthy group, mainly during the pre-activation and stance phases of the SLTHT.
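
A minimal version of the pre-activation comparison described above might look like the following: compute RMS EMG in a window just before ground contact for each participant and compare groups with an unpaired t-test. The window length, sampling rate, and signals are assumptions, not this study's processing pipeline.

```python
# Sketch: pre-activation RMS EMG and an unpaired group comparison (assumed parameters).
import numpy as np
from scipy.stats import ttest_ind

FS = 2000                     # EMG sampling rate (Hz), assumed
PRE_WINDOW_MS = 100           # pre-contact window duration (ms), assumed

def preactivation_rms(emg, contact_idx, fs=FS, window_ms=PRE_WINDOW_MS):
    """RMS of the rectified EMG in the window ending at ground contact."""
    n = int(fs * window_ms / 1000)
    segment = np.abs(emg[max(0, contact_idx - n):contact_idx])
    return np.sqrt(np.mean(segment ** 2))

rng = np.random.default_rng(2)
# Hypothetical vastus lateralis pre-activation RMS per participant (arbitrary units).
pfps_group = [preactivation_rms(rng.normal(0, 1.3, 4000), 3000) for _ in range(14)]
healthy_group = [preactivation_rms(rng.normal(0, 1.0, 4000), 3000) for _ in range(14)]

t_stat, p_value = ttest_ind(pfps_group, healthy_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```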



from #Audiology via ola Kala on Inoreader http://ift.tt/29W16aY
via IFTTT

Striatal functional connectivity changes following specific balance training in elderly people: MRI results of a randomized controlled pilot study

Publication date: Available online 19 July 2016
Source: Gait & Posture
Author(s): Stefano Magon, Lars Donath, Laura Gaetano, Alain Thoeni, Ernst-Wilhelm Radue, Oliver Faude, Till Sprenger
Background: Practice-induced effects of specific balance training on brain structure and activity in elderly people are largely unknown. Aim: In the present study, we investigated morphological and functional brain changes following slackline training (balancing over nylon ribbons) in a group of elderly people. Methods: Twenty-eight healthy volunteers were recruited and randomly assigned to the intervention (mean age: 62.3 ± 5.4 years) or control group (mean age: 61.8 ± 5.3 years). The intervention group completed six weeks of slackline training. Brain morphological changes were investigated using voxel-based morphometry, and functional connectivity changes were computed via independent component analysis and seed-based analyses. All analyses were applied to the whole sample and to a subgroup of participants who improved in slackline performance. Results: The repeated measures analysis of variance showed a significant interaction effect between groups and sessions. Specifically, Tukey post hoc analysis revealed significantly improved slackline standing performance after training for left leg stance time (pre: 4.5 ± 3.6 s vs. post: 26.0 ± 30.0 s, p < 0.038) as well as for tandem stance time (pre: 1.4 ± 0.6 s vs. post: 4.5 ± 4.0 s, p = 0.003) in the intervention group. No significant changes in balance performance were observed in the control group. The MRI analysis did not reveal morphological or functional connectivity differences before or after the training between the intervention and control groups (whole sample). However, a subsequent analysis in subjects with improved slackline performance showed a decrease in connectivity between the striatum and other brain areas during the training period. Conclusion: These preliminary results suggest that improved balance performance with slackline training goes along with increased efficiency of the striatal network.
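
The seed-based connectivity analysis mentioned above reduces, in its simplest form, to correlating a striatal seed time series with every other regional time series and applying a Fisher z-transform. The sketch below does this on simulated data and is only an illustration of the general approach, not the authors' processing pipeline.

```python
# Sketch: seed-based functional connectivity with a striatal seed (simulated data).
import numpy as np

rng = np.random.default_rng(3)
n_timepoints, n_regions = 200, 50
bold = rng.normal(size=(n_timepoints, n_regions))        # stand-in regional BOLD signals
seed = bold[:, 0] + 0.5 * rng.normal(size=n_timepoints)  # striatal seed, correlated with region 0

# Pearson correlation of the seed with every region, then Fisher z for group statistics.
r = np.array([np.corrcoef(seed, bold[:, i])[0, 1] for i in range(n_regions)])
z = np.arctanh(r)

print("strongest connection: region", int(np.argmax(r)), f"r = {r.max():.2f}, z = {z.max():.2f}")
```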



from #Audiology via ola Kala on Inoreader http://ift.tt/29WgqbR
via IFTTT

Editorial Board

Publication date: July 2016
Source: Gait & Posture, Volume 48





from #Audiology via ola Kala on Inoreader http://ift.tt/29W0AtM
via IFTTT
