from #Audiology via ola Kala on Inoreader http://ift.tt/2okwSGQ
via IFTTT
OtoRhinoLaryngology by Sfakianakis G. Alexandros, Anapafseos 5, Agios Nikolaos 72100, Crete, Greece, tel: 00302841026182, 00306932607174
Thursday, 22 February 2018
Semantic context improves speech intelligibility and reduces listening effort for listeners with hearing impairment
Proportion and characteristics of patients who were offered, enrolled in and completed audiologist-delivered cognitive behavioural therapy for tinnitus and hyperacusis rehabilitation in a specialist UK clinic
A method for determining precise electrical hearing thresholds in cochlear implant users
Personality traits predict and moderate the outcome of Internet-based cognitive behavioural therapy for chronic tinnitus
Further validation of the Chinese (Mandarin) Tinnitus Handicap Inventory: comparison between patient-reported and clinician-interviewed outcomes
“There are more important things to worry about”: attitudes and behaviours towards leisure noise and use of hearing protection in young adults
Auditory brainstem, middle and late latency responses to short gaps in noise at different presentation rates
The relationship between speech recognition, behavioural listening effort, and subjective ratings
Longitudinal Changes in Electrically Evoked Auditory Event-Related Potentials in Children With Auditory Brainstem Implants: Preliminary Results Recorded Over 3 Years
Objectives: This preliminary study aimed (1) to assess longitudinal changes in electrically evoked auditory event-related potentials (eERPs) in children with auditory brainstem implants (ABIs) and (2) to explore whether these changes could be accounted for by maturation in the central auditory system of these patients.

Design: Study participants included 5 children (S1 to S5) with an ABI in the affected ear. The stimulus was a train of electrical pulses delivered to individual ABI electrodes via a research interface. For each subject, the eERP was repeatedly measured in multiple test sessions scheduled over up to 41 months after initial device activation. Longitudinal changes in eERPs recorded for each ABI electrode were evaluated using intraclass correlation tests for each subject.

Results: eERPs recorded in S1 showed notable morphological changes for five ABI electrodes over 41 months. In parallel, signs or symptoms of nonauditory stimulation elicited by these electrodes were observed or reported at 41 months. eERPs could not be observed in S2 after 9 months of ABI use but were recorded at 12 months after initial stimulation. Repeatable eERPs were recorded in S3 in the first 9 months. However, these responses were either absent or showed remarkable morphological changes at 30 months. Longitudinal changes in eERP waveform morphology recorded in S4 and S5 were also observed.

Conclusions: eERP responses in children with ABIs could change over a long period of time. Maturation of the central auditory system could not fully account for these observed changes. Children with ABIs need to be closely monitored for potential changes in auditory perception and unfavorable nonauditory sensations. Neuroimaging correlates are needed to better understand the emergence of nonauditory stimulation over time in these children.
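The abstract says longitudinal eERP stability was evaluated with intraclass correlation tests but does not state which ICC variant was used. As a hedged sketch, a one-way random-effects ICC(1,1) over repeated measurements could be computed like this:

```python
def icc_1_1(data):
    """One-way random-effects ICC(1,1) (illustrative; the study's exact
    ICC variant is not specified in the abstract).

    data: list of rows, one per measured unit (e.g. an electrode's eERP
    amplitude), each row holding k repeated measurements across sessions.
    """
    n = len(data)        # number of measured units
    k = len(data[0])     # repeated measurements per unit
    grand = sum(x for row in data for x in row) / (n * k)
    row_means = [sum(row) / k for row in data]
    # between-unit and within-unit mean squares
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(data, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

High ICC values indicate responses that stayed stable across sessions; the morphological changes described above would show up as low ICCs.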
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2BMgQgb
via IFTTT
Letter to the Editor: Johnson, J. A., Xu, J., Cox, R. M. (2017). Impact of Hearing Aid Technology on Outcomes in Daily Life III: Localization. Ear Hear, 38, 746–759
Mindfulness-Based Cognitive Therapy for Chronic Tinnitus: Evaluation of Benefits in a Large Sample of Patients Attending a Tinnitus Clinic
Objectives: Mindfulness-based approaches may benefit patients with chronic tinnitus, but most evidence is from small studies of nonstandardized interventions, and there is little exploration of the processes of change. This study describes the impact of mindfulness-based cognitive therapy (MBCT) in a “real world” tinnitus clinic, using standardized MBCT on the largest sample of patients with chronic tinnitus to date while exploring predictors of change.

Design: Participants were 182 adults with chronic and distressing tinnitus who completed an 8-week MBCT group. Measures of tinnitus-related distress, psychological distress, tinnitus acceptance, and mindfulness were taken preintervention, postintervention, and at 6-week follow-up.

Results: MBCT was associated with significant improvements on all outcome measures. Postintervention, reliable improvements were detected in tinnitus-related distress in 50% and in psychological distress in 41.2% of patients. Changes in mindfulness and tinnitus acceptance explained unique variance in tinnitus-related and psychological distress postintervention.

Conclusions: MBCT was associated with significant and reliable improvements in patients with chronic, distressing tinnitus. Changes were associated with increases in tinnitus acceptance and dispositional mindfulness. This study doubles the combined sample size of all previously published studies. Randomized controlled trials of standardized MBCT protocols are now required to test whether MBCT might offer a new and effective treatment for chronic tinnitus.
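The abstract does not state how "reliable improvement" was operationalized; a common choice in this literature is the Jacobson–Truax reliable change index (RCI), sketched here under that assumption:

```python
import math

def reliable_change(pre, post, sd_baseline, reliability):
    """Jacobson-Truax reliable change index (an assumed criterion; the
    study's actual method is not given in the abstract).

    sd_baseline: SD of the outcome measure at baseline
    reliability: test-retest reliability of the measure
    Returns (rci, is_reliable); |RCI| > 1.96 marks a reliable change.
    """
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_measurement  # SE of a difference score
    rci = (post - pre) / se_diff
    return rci, abs(rci) > 1.96
```

With hypothetical values (baseline SD 15, reliability 0.9), a 20-point drop in distress exceeds the 1.96 cutoff and would count as reliable, while a 2-point drop would not.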
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2Ce4nD3
via IFTTT
Characteristics of Real-World Signal to Noise Ratios and Speech Listening Situations of Older Adults With Mild to Moderate Hearing Loss
Objectives: The first objective was to determine the relationship between speech level, noise level, and signal to noise ratio (SNR), as well as the distribution of SNR, in real-world situations wherein older adults with hearing loss are listening to speech. The second objective was to develop a set of prototype listening situations (PLSs) that describe the speech level, noise level, SNR, availability of visual cues, and locations of speech and noise sources of typical speech listening situations experienced by these individuals.

Design: Twenty older adults with mild to moderate hearing loss carried digital recorders for 5 to 6 weeks to record sounds for 10 hours per day. They also completed in situ surveys on smartphones several times per day to report the characteristics of their current environments, including the locations of the primary talker (if they were listening to speech) and noise source (if it was noisy) and the availability of visual cues. For surveys where speech listening was indicated, the corresponding audio recording was examined. Speech-plus-noise and noise-only segments were extracted, and the SNR was estimated using a power subtraction technique. SNRs and the associated survey data were subjected to cluster analysis to develop PLSs.

Results: The speech level, noise level, and SNR of 894 listening situations were analyzed to address the first objective. Results suggested that as noise levels increased from 40 to 74 dBA, speech levels systematically increased from 60 to 74 dBA, and SNR decreased from 20 to 0 dB. Most SNRs (62.9%) of the collected recordings were between 2 and 14 dB. Very noisy situations that had SNRs below 0 dB comprised 7.5% of the listening situations. To address the second objective, recordings and survey data from 718 observations were analyzed. Cluster analysis suggested that the participants’ daily listening situations could be grouped into 12 clusters (i.e., 12 PLSs). The most frequently occurring PLSs were characterized as having the talker in front of the listener with visual cues available, either in quiet or in diffuse noise. The mean speech level of the PLSs that described quiet situations was 62.8 dBA, and the mean SNR of the PLSs that represented noisy environments was 7.4 dB (speech = 67.9 dBA). A subset of observations (n = 280), which was obtained by excluding the data collected from quiet environments, was further used to develop PLSs that represent noisier situations. From this subset, two PLSs were identified. These two PLSs had lower SNRs (mean = 4.2 dB), but the most frequent situations still involved speech from in front of the listener in diffuse noise with visual cues available.

Conclusions: The present study indicated that visual cues and diffuse noise were exceedingly common in real-world speech listening situations, while environments with negative SNRs were relatively rare. The characteristics of speech level, noise level, and SNR, together with the PLS information reported by the present study, can be useful for researchers aiming to design ecologically valid assessment procedures to estimate real-world speech communicative functions for older adults with hearing loss.
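The power-subtraction technique named in the Design works by measuring power in a speech-plus-noise segment and an adjacent noise-only segment, subtracting the noise power to recover the speech power, and forming the ratio in dB. A minimal sketch, assuming speech and noise are uncorrelated (the study's exact implementation may differ):

```python
import math

def estimate_snr_db(speech_plus_noise, noise_only):
    """Estimate SNR in dB via power subtraction.

    speech_plus_noise, noise_only: sample sequences cut from the recording.
    Speech power is approximated as P(speech+noise) - P(noise), which
    holds on average when speech and noise are uncorrelated.
    """
    p_sn = sum(x * x for x in speech_plus_noise) / len(speech_plus_noise)
    p_n = sum(x * x for x in noise_only) / len(noise_only)
    p_speech = max(p_sn - p_n, 1e-12)  # floor to avoid log of <= 0
    return 10.0 * math.log10(p_speech / p_n)
```

For example, a segment whose total power is twice the noise-only power yields an estimated SNR of 0 dB, the boundary below which the study found only 7.5% of situations.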
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2CeCcUK
via IFTTT
Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures.

In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material.

Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network.

Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners’ abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2oqiKuX
via IFTTT
Dual-Task Walking Performance in Older Persons With Hearing Impairment: Implications for Interventions From a Preliminary Observational Study
Objectives: Adults with hearing loss have an increased risk of falls. There may be an association between hearing impairment and walking performance under dual-task (DT) and triple-task (TT) conditions. The aim of this study was to identify DT and TT effects on walking speed, step length, and cadence in adults with hearing impairment, previous falls, and physical limitations.

Design: The observational study included 73 community-dwelling older people seeking audiology services. Data were collected on sociodemographic characteristics, previous falls, fear of falling, physical limitations, and walking performance under three task conditions. Differences between the task conditions (single task [ST], DT, and TT) and the hearing groups were analyzed with a two-way ANOVA with repeated measures. The influence of fall risks and limited physical functioning on walking under ST, DT, and TT conditions was analyzed with ANOVAs, with ST, DT, and TT performance as repeated measurement factor (i.e., walking speed, step length and Cadence × Previous falls, or short physical performance battery
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2CgRJ6g
via IFTTT
Speech Perception in Noise and Listening Effort of Older Adults With Nonlinear Frequency Compression Hearing Aids
Objectives: The purpose of this laboratory-based study was to compare the efficacy of two hearing aid fittings with and without nonlinear frequency compression, implemented within commercially available hearing aids. Previous research regarding the utility of nonlinear frequency compression has revealed conflicting results for speech recognition, marked by high individual variability. Individual differences in auditory function and cognitive abilities, specifically hearing loss slope and working memory, may contribute to aided performance. The first aim of the study was to determine the effect of nonlinear frequency compression on aided speech recognition in noise and listening effort using a dual-task test paradigm. The hypothesis, based on the Ease of Language Understanding model, was that nonlinear frequency compression would improve speech recognition in noise and decrease listening effort. The second aim of the study was to determine if listener variables of hearing loss slope, working memory capacity, and age would predict performance with nonlinear frequency compression.

Design: A total of 17 adults (age, 57–85 years) with symmetrical sensorineural hearing loss were tested in the sound field using hearing aids fit to target (NAL-NL2). Participants were recruited with a range of hearing loss severities and slopes. A within-subjects, single-blinded design was used to compare performance with and without nonlinear frequency compression. Speech recognition in noise and listening effort were measured by adapting the Revised Speech in Noise Test into a dual-task paradigm. Participants were required trial-by-trial to repeat the last word of each sentence presented in speech babble and then recall the sentence-ending words after every block of six sentences. Half of the sentences were rich in context for the recognition of the final word of each sentence, and half were neutral in context. Extrinsic factors of sentence context and nonlinear frequency compression were manipulated, and intrinsic factors of hearing loss slope, working memory capacity, and age were measured to determine which participant factors were associated with benefit from nonlinear frequency compression.

Results: On average, speech recognition in noise performance significantly improved with the use of nonlinear frequency compression. Individuals with steeply sloping hearing loss received more recognition benefit. Recall performance also significantly improved at the group level, with nonlinear frequency compression revealing reduced listening effort. The older participants within the study cohort received less recall benefit than the younger participants. The benefits of nonlinear frequency compression for speech recognition and listening effort did not correlate with each other, suggesting separable sources of benefit for these outcome measures.

Conclusions: Improvements of speech recognition in noise and reduced listening effort indicate that adult hearing aid users can receive benefit from nonlinear frequency compression in a noisy environment, with the amount of benefit varying across individuals and across outcome measures. Evidence supports individualized selection of nonlinear frequency compression, with results suggesting benefits in speech recognition for individuals with steeply sloping hearing losses and in listening effort for younger individuals. Future research is indicated with a larger data set on the dual-task paradigm as a potential cognitive outcome measure.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2BK145D
via IFTTT
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences
Objectives: Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined.

Design: One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined.

Results: In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable.

Conclusions: Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2CchPr6
via IFTTT
Discrimination of Voice Pitch and Vocal-Tract Length in Cochlear Implant Users
Objectives: When listening to two competing speakers, normal-hearing (NH) listeners can take advantage of voice differences between the speakers. Users of cochlear implants (CIs) have difficulty in perceiving speech on speech. Previous literature has indicated sensitivity to voice pitch (related to fundamental frequency, F0) to be poor among implant users, while sensitivity to vocal-tract length (VTL; related to the height of the speaker and formant frequencies), the other principal voice characteristic, has not been directly investigated in CIs. A few recent studies evaluated F0 and VTL perception indirectly, through voice gender categorization, which relies on perception of both voice cues. These studies revealed that, contrary to prior literature, CI users seem to rely exclusively on F0 while not utilizing VTL to perform this task. The objective of the present study was to directly and systematically assess raw sensitivity to F0 and VTL differences in CI users to define the extent of the deficit in voice perception.

Design: The just-noticeable differences (JNDs) for F0 and VTL were measured in 11 CI listeners using triplets of consonant–vowel syllables in an adaptive three-alternative forced choice method.

Results: The results showed that while NH listeners had average JNDs of 1.95 and 1.73 semitones (st) for F0 and VTL, respectively, CI listeners showed JNDs of 9.19 and 7.19 st. These JNDs correspond to differences of 70% in F0 and 52% in VTL. For comparison to the natural range of voices in the population, the F0 JND in CIs remains smaller than the typical male–female F0 difference. However, the average VTL JND in CIs is about twice as large as the typical male–female VTL difference.

Conclusions: These findings, thus, directly confirm that CI listeners do not seem to have sufficient access to VTL cues, likely as a result of limited spectral resolution, and, hence, that CI listeners’ voice perception deficit goes beyond poor perception of F0. These results provide a potential common explanation not only for a number of deficits observed in CI listeners, such as voice identification and gender categorization, but also for competing speech perception.
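The percent figures in the Results follow from the definition of a semitone as a frequency ratio of 2^(1/12): a JND of s semitones corresponds to a (2^(s/12) − 1) × 100% difference. A quick check of the reported values:

```python
def semitones_to_percent(st):
    """Convert a difference in semitones to a percent difference.

    One semitone is a frequency ratio of 2**(1/12), so 12 st = one
    octave = a 100% increase.
    """
    return (2 ** (st / 12.0) - 1.0) * 100.0

# The CI listeners' JNDs reported in the abstract:
f0_pct = semitones_to_percent(9.19)   # ~70.0, matching the 70% reported
vtl_pct = semitones_to_percent(7.19)  # ~51.5, which the abstract rounds to 52%
```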
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2oqzthG
via IFTTT
The Effect of Signal to Noise Ratio on Cortical Auditory–Evoked Potentials Elicited to Speech Stimuli in Infants and Adults With Normal Hearing
Objectives: Identification and discrimination of speech sounds in noisy environments is challenging for adults and even more so for infants and children. Behavioral studies consistently report maturational differences in the influence that signal to noise ratio (SNR) and masker type have on speech processing; however, few studies have investigated the neural mechanisms underlying these differences at the level of the auditory cortex. In the present study, we investigated the effect of different SNRs on speech-evoked cortical auditory–evoked potentials (CAEPs) in infants and adults with normal hearing.

Design: A total of 10 adults (mean age 24.1 years) and 15 infants (mean age 30.7 weeks), all with normal hearing, were included in the data analyses. CAEPs were evoked to /m/ and /t/ speech stimuli (duration: 79 ms) presented at 75 dB SPL in the sound field with a jittered interstimulus interval of 1000–1200 ms. Each of the stimuli was presented in quiet and in the presence of white noise (SNRs of 10, 15, and 20 dB). Amplitude and latency measures were compared for P1, N1, and P2 for adults and for the large positivity (P) and following negativity (N: N250 and/or N450) for infants elicited in quiet and across SNR conditions.

Results: Infant P-N responses to /t/ showed no statistically significant amplitude and latency effects across SNR conditions; in contrast, infant CAEPs to /m/ were greatly reduced in amplitude and delayed in latency. Responses were more frequently absent for SNRs of 20 dB or less. Adult P1-N1-P2 responses were present for all SNRs for /t/ and most SNRs for /m/ (two adults had no responses to /m/ for SNR 10); significant effects of SNR were found for P1, N1, and P2 amplitude and latencies.

Conclusions: The findings of the present study indicate that SNR effects on CAEP amplitudes and latencies in infants cannot be generalized across different types of speech stimuli and cannot be predicted from adult data. These findings also suggest that factors other than energetic masking are contributing to the immaturities in the SNR effects for infants. How these CAEP findings relate to an infant’s capacity to process speech-in-noise perceptually has yet to be established; however, we can be confident that the presence of CAEPs to a speech stimulus in noise means that the stimulus is detected at the level of the auditory cortex. The absence of a response should be interpreted with caution as further studies are needed to investigate a range of different speech stimuli and SNRs, in conjunction with behavioral measures, to confirm that infant CAEPs do indeed reflect functional auditory capacity to process speech stimuli in noise.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2oqyXAg
via IFTTT
Responsiveness of the Electrically Stimulated Cochlear Nerve in Children With Cochlear Nerve Deficiency
Objectives: This study aimed to (1) investigate the responsiveness of the cochlear nerve (CN) to a single biphasic-electrical pulse in implanted children with cochlear nerve deficiency (CND) and (2) compare their results with those measured in implanted children with normal-size CNs.

Design: Participants included 23 children with CND (CND1 to CND23) and 18 children with normal-size CNs (S1 to S18). All subjects except for CND1 used Cochlear Nucleus cochlear implants with contour electrode arrays in their test ears. CND1 was implanted with a Cochlear Nucleus Freedom cochlear implant with a straight electrode array in the test ear. For each subject, the CN input/output (I/O) function and the refractory recovery function were measured using electrophysiological measures of the electrically evoked compound action potential (eCAP) at multiple electrode sites across the electrode array. Dependent variables included eCAP threshold, the maximum eCAP amplitude, slope of the I/O function, and time-constants of the refractory recovery function. Slopes of I/O functions were estimated using statistical modeling with a sigmoidal function. Recovery time-constants, including measures of the absolute refractory period and the relative refractory period, were estimated using statistical modeling with an exponential decay function. Generalized linear mixed-effect models were used to evaluate the effects of electrode site on the dependent variables measured in children with CND and to compare results of these dependent variables between subject groups.

Results: The eCAP was recorded at all test electrodes in children with normal-size CNs. In contrast, the eCAP could not be recorded at any electrode site in 4 children with CND. For all other children with CND, the percentage of electrodes with measurable eCAPs decreased as the stimulating site moved in a basal-to-apical direction. For children with CND, the stimulating site had a significant effect on the slope of the I/O functions and the relative refractory period but showed no significant effect on eCAP threshold and the maximum eCAP amplitude. Children with CND had significantly higher eCAP thresholds, smaller maximum eCAP amplitudes, flatter slopes of I/O functions, and longer absolute refractory periods than children with normal-size CNs. There was no significant difference in the relative refractory period measured in these two subject groups.

Conclusions: In children with CND, the functional status of the CN varied along the length of the cochlea. Compared with children with normal-size CNs, children with CND showed reduced CN responsiveness to electrical stimuli. The prolonged CN absolute refractory period in children with CND might account for, at least partially, the observed benefit of using relatively slow pulse rate in these patients.
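The Design names two model families without giving their equations, so the sketch below uses common parameterizations (assumptions, not the paper's exact forms): a sigmoid for the eCAP amplitude growth (I/O) function, and an exponential recovery function whose offset and time constant stand in for the absolute and relative refractory periods.

```python
import math

def sigmoid_io(level, a_max, slope, level50):
    """Sigmoidal eCAP input/output function (assumed parameterization).

    a_max:   saturation (maximum) eCAP amplitude
    slope:   steepness at the midpoint
    level50: stimulus level producing half-maximal amplitude
    """
    return a_max / (1.0 + math.exp(-slope * (level - level50)))

def recovery(t, a_max, t0, tau):
    """Exponential refractory-recovery function (assumed parameterization).

    t0:  absolute refractory period (no response for t <= t0)
    tau: relative refractory time constant
    """
    if t <= t0:
        return 0.0
    return a_max * (1.0 - math.exp(-(t - t0) / tau))
```

Fitting these models to measured eCAP amplitudes (e.g. with a nonlinear least-squares routine such as scipy.optimize.curve_fit) yields the dependent variables the study compares across electrode sites and subject groups: flatter fitted slopes and longer t0 correspond to the CND group's findings.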
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2CdqTfu
via IFTTT
Pediatric Auditory Brainstem Implantation: Surgical, Electrophysiologic, and Behavioral Outcomes
Objectives: The objectives of this study were to demonstrate the safety of auditory brainstem implant (ABI) surgery and document the subsequent development of auditory and spoken language skills in children without neurofibromatosis type II (NFII).

Design: A prospective, single-subject observational study of ABI in children without NFII was undertaken at the University of North Carolina at Chapel Hill. Five children were enrolled under an investigational device exemption sponsored by the investigators. Over 3 years, patient demographics, medical/surgical findings, complications, device mapping, electrophysiologic measures, audiologic outcomes, and speech and language measures were collected.

Results: Five children without NFII have received ABIs to date without permanent medical sequelae, although 2 children required treatment after surgery for temporary complications. All children wear their device daily, and the benefits of sound awareness have developed slowly. Intra- and postoperative electrophysiologic measures augmented surgical placement and device programming. The slow development of audition skills precipitated limited changes in speech production but had little impact on growth in spoken language.

Conclusions: ABI surgery is safe in young children without NFII. Benefits from device use develop slowly and include sound awareness and the use of pattern and timing aspects of sound. These skills may augment progress in speech production, but progress in language development is dependent upon visual communication. Further monitoring of this cohort is needed to better delineate the benefits of this intervention in this patient population.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2BKmM9E
via IFTTT
Characterizing the Age and Stimulus Frequency Interaction for Ocular Vestibular-Evoked Myogenic Potentials
Objectives: The normal process of aging is mostly associated with global decline in almost all sensory aspects of the human body. While aging affects the 500-Hz tone burst–evoked ocular vestibular-evoked myogenic potentials (oVEMPs) by reducing the amplitudes and prolonging the latencies, its interaction with oVEMP responses at other frequencies has not been studied. Therefore, the present study aimed at investigating the impact of advancing age on the frequency tuning of oVEMP. Design: Using a cross-sectional research design, oVEMPs were recorded for tone burst frequencies of 250, 500, 750, 1000, 1500, and 2000 Hz from 270 healthy individuals divided into six age groups (10–20, 20–30, 30–40, 40–50, 50–60, and >60 years). Results: The results revealed significantly lower response rates and amplitudes in age groups above 50 years of age than all the other groups at nearly all the frequencies (p
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2CdqN7C
via IFTTT
Assessing the Relationship Between the Electrically Evoked Compound Action Potential and Speech Recognition Abilities in Bilateral Cochlear Implant Recipients
Objectives: The primary objective of the present study was to examine the relationship between suprathreshold electrically evoked compound action potential (ECAP) measures and speech recognition abilities in bilateral cochlear implant listeners. We tested the hypothesis that the magnitude of ear differences in ECAP measures within a subject (right–left) could predict the difference in speech recognition performance abilities between that subject’s ears (right–left). Design: To better control for across-subject variables that contribute to speech understanding, the present study used a within-subject design. Subjects were 10 bilaterally implanted adult cochlear implant recipients. We measured ECAP amplitudes and slopes of the amplitude growth function in both ears for each subject. We examined how each of these measures changed when increasing the interphase gap of the biphasic pulses. Previous animal studies have shown correlations between these ECAP measures and auditory nerve survival. Speech recognition measures included speech reception thresholds for sentences in background noise, as well as phoneme discrimination in quiet and in noise. Results: Results showed that the between-ear difference (right–left) of one specific ECAP measure (increase in amplitude growth function slope as the interphase gap increased from 7 to 30 µs) was significantly related to the between-ear difference (right–left) in speech recognition. Frequency-specific response patterns for ECAP data and consonant transmission cues support the hypothesis that this particular ECAP measure may represent localized functional acuity. Conclusions: The results add to a growing body of literature suggesting that when using a well-controlled research design, there is evidence that underlying neural function is related to postoperative performance with a cochlear implant.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2BJrLaE
via IFTTT
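The study's key predictor — the increase in amplitude growth function (AGF) slope as the interphase gap widens from 7 to 30 µs, differenced between ears — can be sketched numerically. The stimulation levels, amplitudes, and units below are entirely hypothetical, and a simple linear fit stands in for whatever fitting procedure the authors actually used:

```python
import numpy as np

def agf_slope(levels, amplitudes):
    """Slope of the ECAP amplitude growth function via a linear fit."""
    return np.polyfit(levels, amplitudes, 1)[0]

def ipg_slope_increase(levels, amps_ipg7, amps_ipg30):
    """Increase in AGF slope when the interphase gap widens from 7 to 30 us."""
    return agf_slope(levels, amps_ipg30) - agf_slope(levels, amps_ipg7)

# Hypothetical per-ear data: stimulation levels (current units) and ECAP
# amplitudes (uV) measured at the two interphase gaps.
levels = np.array([150.0, 160.0, 170.0, 180.0, 190.0])
right = ipg_slope_increase(levels,
                           np.array([10.0, 25.0, 45.0, 70.0, 100.0]),
                           np.array([12.0, 30.0, 55.0, 85.0, 120.0]))
left = ipg_slope_increase(levels,
                          np.array([8.0, 20.0, 35.0, 55.0, 80.0]),
                          np.array([9.0, 23.0, 40.0, 62.0, 90.0]))
# The within-subject predictor: right-minus-left difference of the slope
# increase, which the study relates to the right-minus-left difference
# in speech recognition performance.
between_ear_diff = right - left
```

Repeating this across subjects and correlating `between_ear_diff` with the between-ear difference in speech scores reproduces the shape of the analysis, with across-subject confounds cancelled by the within-subject differencing.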
Stability of Auditory Steady State Responses Over Time
Objectives: Auditory steady state responses (ASSRs) are used in clinical practice for objective hearing assessments. The response is called steady state because it is assumed to be stable over time, and because it is evoked by a stimulus with a certain periodicity, which will lead to discrete frequency components that are stable in amplitude and phase over time. However, the stimuli commonly used to evoke ASSRs are also known to be able to induce loudness adaptation behaviorally. Researchers and clinicians using ASSRs assume that the response remains stable over time. This study investigates (1) the stability of ASSR amplitudes over time, within one recording, and (2) whether loudness adaptation can be reflected in ASSRs. Design: ASSRs were measured from 14 normal-hearing participants. The ASSRs were evoked by the stimuli that caused the most loudness adaptation in a previous behavioral study, that is, mixed-modulated sinusoids with carrier frequencies of either 500 or 2000 Hz, a modulation frequency of 40 Hz, and a low sensation level of 30 dB SL. For each carrier frequency and participant, 40 repetitions of 92 sec recordings were made. Two types of analyses were used to investigate the ASSR amplitudes over time: with the more traditionally used Fast Fourier Transform and with a novel Kalman filtering approach. Robust correlations between the ASSR amplitudes and behavioral loudness adaptation ratings were also calculated. Results: Overall, ASSR amplitudes were stable. Over all individual recordings, the median change of the amplitudes over time was −0.0001 μV/s. Based on group analysis, a significant but very weak decrease in amplitude over time was found, with the decrease in amplitude over time around −0.0002 μV/s. Correlation coefficients between ASSR amplitudes and behavioral loudness adaptation ratings were significant but low to moderate, with r = 0.27 and r = 0.39 for the 500 and 2000 Hz carrier frequency, respectively. 
Conclusions: The decrease in amplitude of ASSRs over time (92 sec) is small. Consequently, it is safe to use ASSRs in clinical practice, and additional correction factors for objective hearing assessments are not needed. Because only small decreases in amplitudes were found, loudness adaptation is probably not reflected by the ASSRs.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2CdqIkk
via IFTTT
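The Fast Fourier Transform analysis described in the Design can be sketched on synthetic data: a 40-Hz response whose amplitude decays at roughly the reported rate, tracked epoch by epoch. The sampling rate, epoch length, and noise level are assumptions, not values from the study:

```python
import numpy as np

fs = 1000          # sampling rate in Hz (assumed)
f_mod = 40         # ASSR modulation frequency (Hz), as in the study
epoch_s = 4        # analysis epoch length in s; a 92-s recording gives 23 epochs
n_epochs = 23

rng = np.random.default_rng(0)
t = np.arange(fs * epoch_s) / fs
amps = []
for k in range(n_epochs):
    # Simulated response: 40-Hz component decaying at the reported ~0.0002 uV/s
    amp = 1.0 - 0.0002 * (k * epoch_s)
    x = amp * np.sin(2 * np.pi * f_mod * t) + 0.1 * rng.standard_normal(t.size)
    spec = np.abs(np.fft.rfft(x)) * 2 / t.size   # single-sided amplitude spectrum
    amps.append(spec[f_mod * epoch_s])           # 40 Hz lands on an exact FFT bin
# Linear drift of the 40-Hz amplitude over the recording, in uV/s
drift = np.polyfit(np.arange(n_epochs) * epoch_s, amps, 1)[0]
```

Choosing an epoch length that is an integer number of modulation cycles puts the 40-Hz component on an exact FFT bin, so its amplitude can be read off without spectral leakage.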
Patients’ and Clinicians’ Views of the Psychological Components of Tinnitus Treatment That Could Inform Audiologists’ Usual Care: A Delphi Survey
Objectives: The aim of this study was to determine which components of psychological therapies are most important and appropriate to inform audiologists’ usual care for people with tinnitus. Design: A 39-member panel of patients, audiologists, hearing therapists, and psychologists completed a three-round Delphi survey to reach consensus on essential components of audiologist-delivered psychologically informed care for tinnitus. Results: Consensus (≥80% agreement) was reached on including 76 of 160 components. No components reached consensus for exclusion. The components reaching consensus were predominantly common therapeutic skills such as Socratic questioning and active listening, rather than specific techniques, for example, graded exposure therapy or cognitive restructuring. Consensus on educational components to include largely concerned psychological models of tinnitus rather than neurophysiological information. Conclusions: The results of this Delphi survey provide a tool to develop audiologists’ usual tinnitus care using components that both patients and clinicians agree are important and appropriate to be delivered by an audiologist for adults with tinnitus-related distress. Research is now necessary to test the added effects of these components when delivered by audiologists.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2BMimPL
via IFTTT
Communicative Function Use of Preschoolers and Mothers From Differing Racial and Socioeconomic Groups
Purpose
This study explores whether communicative function (CF: reasons for communicating) use differs by socioeconomic status (SES), race/ethnicity, or gender among preschoolers and their mothers.
Method
Mother–preschooler dyads (N = 95) from the National Center for Early Development and Learning's (2005) study of family and social environments were observed during 1 structured learning and free-play interaction. CFs were coded by trained independent raters.
Results
Children used all CFs at similar rates, but those from low SES homes produced fewer utterances and less reasoning, whereas boys used less self-maintaining and more predicting. African American mothers produced more directing and less responding than European American and Latino American mothers, and Latino American mothers produced more utterances than European American mothers. Mothers from low SES homes did more directing and less responding.
Conclusions
Mothers exhibited more sociocultural differences in CFs than children; this suggests that maternal demographic characteristics may influence CF production more than child demographics at school entry. Children from low SES homes talking less and boys producing less self-maintaining coincided with patterns previously detected in pragmatic literature. Overall, preschoolers from racial/ethnic minority and low SES homes were not less deft with CF usage, which may inform how their pragmatic skills are described.
Supplemental Material
https://doi.org/10.23641/asha.5890255
from #Audiology via ola Kala on Inoreader http://ift.tt/2oqc9k1
via IFTTT
Dysarthria in Mandarin-Speaking Children With Cerebral Palsy: Speech Subsystem Profiles
Purpose
This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences.
Method
Data from 6 children with CP and a clinical diagnosis of moderate dysarthria were compared with data from 9 TD children using a multiple speech subsystems approach. Acoustic and perceptual variables reflecting 3 speech subsystems (articulatory-phonetic, phonatory, and prosodic), and speech intelligibility, were measured based on speech samples obtained from the Test of Children's Speech Intelligibility in Mandarin (developed in the lab for the purpose of this research).
Results
The CP and TD children differed in several aspects of speech subsystem function. Speech intelligibility scores in children with CP were influenced by all 3 speech subsystems, but articulatory-phonetic variables had the highest correlation with word intelligibility. All 3 subsystems influenced sentence intelligibility.
Conclusion
Children with CP demonstrated deficits in speech intelligibility and articulation compared with TD children. Better speech sound articulation influenced higher word intelligibility, but did not benefit sentence intelligibility.
from #Audiology via ola Kala on Inoreader http://ift.tt/2CF5ac0
via IFTTT
Metapragmatic Explicitation and Social Attribution in Social Communication Disorder and Developmental Language Disorder: A Comparative Study
Purpose
The purposes of this study are to investigate metapragmatic (MP) ability in 6–11-year-old children with social communication disorder (SCD), developmental language disorder (DLD), and typical language development and to explore factors associated with MP explicitation and social understanding (SU).
Method
In this cross-sectional study, all participants (N = 82) completed an experimental task, the Assessment of Metapragmatics (Collins et al., 2014), in which pragmatic errors are identified in filmed interactions. Responses were scored for complexity/type of explicitation (MP score) and attribution of social characteristics to the films' characters (SU score).
Results
Groups with SCD and DLD had significantly lower MP scores and less sophisticated explicitation than the group with typical language development. After controlling for language and age, the group with SCD had significantly lower SU scores than the group with DLD. Significant correlations were found between MP scores and age/language ability but not with pragmatic impairment.
Conclusions
Children with SCD or DLD performed poorly on an MP task compared with children who are typically developing but do not differ from each other in ability to reflect verbally on pragmatic features in interactions. MP ability appears to be closely related to structural language ability. The limited ability of children with SCD to attribute social/psychological states to interlocutors may indicate additional social attribution limitations.
from #Audiology via ola Kala on Inoreader http://ift.tt/2EOBCOY
via IFTTT
Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease
Purpose
The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility.
Method
Participants were tested under permutations of low, mid, and high STN-DBS frequency, voltage, and pulse width settings. At each session, participants recited a sentence. Acoustic characteristics of vowel production were extracted, and naive listeners provided estimates of speech intelligibility.
Results
Overall, lower-frequency STN-DBS stimulation (60 Hz) was found to lead to improvements in intelligibility and acoustic vowel expansion. An interaction between speaker sex and STN-DBS stimulation was found for vowel measures. The combination of low frequency, mid to high voltage, and low to mid pulse width led to optimal speech outcomes; however, these settings did not demonstrate significant speech outcome differences compared with the standard clinical STN-DBS settings, likely due to substantial individual variability.
Conclusions
Although lower-frequency STN-DBS stimulation was found to yield consistent improvements in speech outcomes, it was not found to necessarily lead to the best speech outcomes for all participants. Nevertheless, frequency may serve as a starting point to explore settings that will optimize an individual's speech outcomes following STN-DBS surgery.
Supplemental Material
https://doi.org/10.23641/asha.5899228
from #Audiology via ola Kala on Inoreader http://ift.tt/2CF55oI
via IFTTT
Aging Effects on Leg Joint Variability during Walking with Balance Perturbations
Publication date: Available online 21 February 2018
Source:Gait & Posture
Author(s): Mu Qiao, Jody A. Feld, Jason R. Franz
Background: Older adults are more susceptible to balance perturbations during walking than young adults. However, we lack an individual joint-level understanding of how aging affects the neuromechanical strategies used to accommodate balance perturbations. Research Question: We investigated gait phase-dependence in, and aging effects on, leg joint kinematic variability during walking with balance perturbations. We hypothesized that leg joint variability would (1) vary across the gait cycle and (2) increase with balance perturbations. We also hypothesized that perturbation effects on leg joint kinematic variability would be larger and more pervasive in older versus young adults. Methods: We collected leg joint kinematics in young and older adults walking with and without mediolateral optical flow perturbations of different amplitudes. Results: We first found that leg joint variability during walking is gait phase-dependent, with step-to-step adjustments occurring predominantly during push-off and early swing. Second, young adults accommodated perturbations almost exclusively by increasing coronal plane hip joint variability, likely to adjust step width. Third, perturbations elicited larger and more pervasive increases in all joint kinematic outcome measures in older adults. Finally, we also provide insight into which joints contribute most to foot placement variability in walking: variability in sagittal plane knee and coronal plane hip joint angles contributed most to that in step length and step width, respectively. Significance: Taken together, our findings may be highly relevant to identifying specific joint-level therapeutic targets to mitigate balance impairment in the aging population.
from #Audiology via ola Kala on Inoreader http://ift.tt/2GCl7lL
via IFTTT
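Step-to-step kinematic variability of the kind analyzed here is commonly computed as the across-stride standard deviation of a time-normalized joint angle at each point in the gait cycle. A minimal sketch with made-up hip-angle data (the study's actual pipeline and parameters are not given in the abstract):

```python
import numpy as np

def phase_variability(strides, n_points=101):
    """Across-stride SD of a joint angle at each % of the gait cycle.

    strides: list of 1-D joint-angle arrays (deg), one per stride; each stride
    is time-normalized to 0-100% before the SD is taken across strides.
    """
    grid = np.linspace(0.0, 1.0, n_points)
    resampled = np.vstack([
        np.interp(grid, np.linspace(0.0, 1.0, len(s)), s) for s in strides
    ])
    return resampled.std(axis=0)

# Hypothetical data: a smooth hip-angle trajectory plus stride-to-stride noise
rng = np.random.default_rng(1)
base = 30.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 120))
strides = [base + rng.normal(0.0, 2.0, base.size) for _ in range(20)]
sd_by_phase = phase_variability(strides)  # peaks mark gait phases of high variability
```

Comparing such SD profiles between perturbed and unperturbed walking, per joint and plane, mirrors the phase-dependent comparison the authors describe.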
Source:Gait & Posture
Author(s): Mu Qiao, Jody A. Feld, Jason R. Franz
BackgroundOlder adults are more susceptible to balance perturbations during walking than young adults. However, we lack an individual joint-level understanding of how aging affects the neuromechanical strategies used to accommodate balance perturbations.Research QuestionWe investigated gait phase-dependence in and aging effects on leg joint kinematic variability during walking with balance perturbations. We hypothesized that leg joint variability would: 1) vary across the gait cycle and 2) increase with balance perturbations. We also hypothesized that perturbation effects on leg joint kinematic variability would be larger and more pervasive in older versus young adults.MethodsWe collected leg joint kinematics in young and older adults walking with and without mediolateral optical flow perturbations of different amplitudes.ResultsWe first found that leg joint variability during walking is gait phase-dependent, with step-to-step adjustments occurring predominantly during push-off and early swing. Second, young adults accommodated perturbations almost exclusively by increasing coronal plane hip joint variability, likely to adjust step width. Third, perturbations elicited larger and more pervasive increases in all joint kinematic outcome measures in older adults. Finally, we also provide insight into which joints contribute more to foot placement variability in walking, adding that variability in sagittal plane knee and coronal plane hip joint angles contributed most to that in step length and step width, respectively.SignificanceTaken together, our findings may be highly relevant to identifying specific joint-level therapeutic targets to mitigate balance impairment in our aging population.
from #Audiology via ola Kala on Inoreader http://ift.tt/2GCl7lL
via IFTTT
Aging Effects on Leg Joint Variability during Walking with Balance Perturbations
Publication date: Available online 21 February 2018
Source:Gait & Posture
Author(s): Mu Qiao, Jody A. Feld, Jason R. Franz
BackgroundOlder adults are more susceptible to balance perturbations during walking than young adults. However, we lack an individual joint-level understanding of how aging affects the neuromechanical strategies used to accommodate balance perturbations.Research QuestionWe investigated gait phase-dependence in and aging effects on leg joint kinematic variability during walking with balance perturbations. We hypothesized that leg joint variability would: 1) vary across the gait cycle and 2) increase with balance perturbations. We also hypothesized that perturbation effects on leg joint kinematic variability would be larger and more pervasive in older versus young adults.MethodsWe collected leg joint kinematics in young and older adults walking with and without mediolateral optical flow perturbations of different amplitudes.ResultsWe first found that leg joint variability during walking is gait phase-dependent, with step-to-step adjustments occurring predominantly during push-off and early swing. Second, young adults accommodated perturbations almost exclusively by increasing coronal plane hip joint variability, likely to adjust step width. Third, perturbations elicited larger and more pervasive increases in all joint kinematic outcome measures in older adults. Finally, we also provide insight into which joints contribute more to foot placement variability in walking, adding that variability in sagittal plane knee and coronal plane hip joint angles contributed most to that in step length and step width, respectively.SignificanceTaken together, our findings may be highly relevant to identifying specific joint-level therapeutic targets to mitigate balance impairment in our aging population.
from #Audiology via xlomafota13 on Inoreader http://ift.tt/2GCl7lL
via IFTTT
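The gait-phase-dependent variability the abstract describes is, in essence, the across-stride standard deviation of a joint angle at each point of the time-normalized gait cycle. A minimal Python sketch of that computation follows; the synthetic stride data, the injected push-off noise, and all numbers are illustrative assumptions, not the study's data or methods.

```python
import numpy as np

def phase_dependent_variability(joint_angle_strides):
    """Stride-to-stride kinematic variability at each point in the gait cycle.

    joint_angle_strides: array of shape (n_strides, n_phase_points), each row
    one stride's joint angle time-normalized to 0-100% of the gait cycle.
    Returns the across-stride sample standard deviation at each phase point.
    """
    return np.std(joint_angle_strides, axis=0, ddof=1)

# Illustrative data: 30 strides of a sagittal-plane knee angle sampled at
# 101 points (0-100% gait cycle), with extra step-to-step variability
# injected near push-off/early swing (~60-75% of the cycle).
rng = np.random.default_rng(0)
phase = np.linspace(0, 100, 101)
mean_angle = 30 * np.sin(np.pi * phase / 100)            # toy mean trajectory
noise_scale = np.where((phase > 60) & (phase < 75), 3.0, 1.0)
strides = mean_angle + rng.normal(0, 1, (30, 101)) * noise_scale

sd = phase_dependent_variability(strides)
peak_phase = phase[int(np.argmax(sd))]   # lands in the push-off/early-swing window
```

On real data, `strides` would come from motion-capture joint angles segmented at heel strikes and resampled to a common phase axis before this statistic is taken.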
Pure-Tone Audiometry With Forward Pressure Level Calibration Leads to Clinically-Relevant Improvements in Test–Retest Reliability
Objectives: Clinical pure-tone audiometry is conducted using stimuli delivered through supra-aural headphones or insert earphones. The stimuli are calibrated in an acoustic (average-ear) coupler. Deviations of individual-ear acoustics from the coupler acoustics affect test validity, and variations in probe insertion and headphone placement affect both test validity and test–retest reliability. Using an insert earphone designed for otoacoustic emission testing, which contains a microphone and loudspeaker, an individualized in-the-ear calibration can be calculated from the ear-canal sound pressure measured at the microphone. However, the total sound pressure level (SPL) measured at the microphone may be affected by standing-wave nulls at higher frequencies, producing errors in stimulus level of up to 20 dB. An alternative is to calibrate using the forward pressure level (FPL) component, which is derived from the total SPL using a wideband acoustic immittance measurement and represents the pressure wave incident on the eardrum. The objective of this study was to establish test–retest reliability for FPL calibration of pure-tone audiometry stimuli, compared with in-the-ear and coupler sound pressure calibrations.
Design: The authors compared standard audiometry using a modern clinical audiometer with TDH-39P supra-aural headphones calibrated in a coupler to a prototype audiometer with an ER10C earphone calibrated three ways: (1) in the ear using the total SPL at the microphone, (2) in the ear using the FPL at the microphone, and (3) in a coupler (all three derived from the same measurement). The test procedure was similar to that commonly used in hearing-conservation programs, using pulsed-tone test frequencies of 0.5, 1, 2, 3, 4, 6, and 8 kHz and an automated modified Hughson-Westlake audiometric procedure. Fifteen adult participants with normal to mildly impaired hearing were selected, and one ear from each was tested. Participants completed 10 audiograms on each system, with test order randomly varied and with headphones and earphones refitted by the tester between tests.
Results: Fourteen of 15 ears had standing-wave nulls present between 4 and 8 kHz. The mean intrasubject SD at 6 and 8 kHz was lowest for the FPL calibration and was comparable with the low-frequency reliability across calibration methods. This decrease in variability translates to statistically derived significant-threshold-shift criteria indicating that 15 dB shifts in hearing can be reliably detected at 6 and 8 kHz using FPL-calibrated ER10C earphones, compared with 20 to 25 dB shifts using standard TDH-39P headphones with a coupler calibration.
Conclusions: These results indicate that reliability is better with insert earphones, especially with in-the-ear FPL calibration, than with a standard clinical audiometer and supra-aural headphones. However, in-the-ear SPL calibration should not be used because of its sensitivity to standing waves. The improvement in reliability is clinically meaningful, potentially allowing hearing-conservation programs to more confidently determine significant threshold shifts at 6 kHz, a key frequency for the early detection of noise-induced hearing loss.
Acknowledgments: The authors thank Pat Jeng for advice on experimental design; Lynne Marshall for advice on the experimental protocols and for reviewing an earlier version of the manuscript; Bill Ahroon (U.S. Army Aeromedical Research Laboratory) for the loan of an Army-issue audiometer; Laurie Heller for statistical advice; Rob Withnell for sharing data; and Kurt Yankaskas, Program Officer for Noise-Induced Hearing Loss at the Office of Naval Research, for his support. This work was supported by small business innovation research awards to Mimosa Acoustics from the Office of the Secretary of Defense under contract N00014-15-C-0046 and the Defense Health Program under contract W81XWH-16-C-0185. Portions of this article were presented at the 43rd Annual Scientific and Technology Conference of the American Auditory Society, Scottsdale, AZ. The content of this report is solely the responsibility of the authors and does not necessarily represent the official views of the Department of Defense or the US Government. J.L.M. and C.M.R. designed the experiment. C.M.R. and Z.D.P. performed the experiment at the Research Laboratory of Electronics at the Massachusetts Institute of Technology. J.L.M. analyzed the data. J.L.M. and S.R.R. wrote the article. The authors have no conflicts of interest to disclose. Address for correspondence: Judi A. Lapsley Miller, Mimosa Acoustics, 335 Fremont Street, Champaign, IL 61820, USA. E-mail: judi@mimosaacoustics.com. Received January 5, 2017; accepted December 21, 2017. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.
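The FPL calibration rests on decomposing the total ear-canal pressure into forward- and reverse-traveling waves: P_total = P_forward × (1 + R), where R is the complex pressure reflectance obtained from the wideband acoustic immittance measurement. Near a standing-wave null, |1 + R| is small, so the microphone's total SPL underestimates the stimulus even though the forward wave is unchanged. A hedged Python sketch of that relation follows; the reflectance value and pressures are invented to illustrate a null, and this is not the authors' or Mimosa's implementation.

```python
import numpy as np

P0 = 20e-6  # reference pressure, Pa (0 dB SPL)

def forward_pressure(p_total, reflectance):
    """Forward pressure component of the total ear-canal pressure.

    Uses P_total = P_forward * (1 + R), where R is the complex pressure
    reflectance at the probe microphone. Hypothetical helper for illustration.
    """
    return p_total / (1 + reflectance)

def spl(p):
    """Level in dB re 20 uPa of a (complex) pressure amplitude."""
    return 20 * np.log10(np.abs(p) / P0)

# Toy standing-wave null: the reflected wave nearly cancels the forward wave
# at the microphone position, so R is close to -1.
p_fwd_true = 0.02 + 0j              # incident (forward) wave, Pa
R = -0.9 + 0.05j                    # assumed reflectance near the null
p_total = p_fwd_true * (1 + R)      # what the microphone actually measures

# A total-SPL calibration misjudges the stimulus by roughly 20 dB here,
# consistent with the error magnitude the abstract cites; the FPL-derived
# estimate recovers the incident wave.
error_db = spl(p_fwd_true) - spl(p_total)
p_fwd_est = forward_pressure(p_total, R)
```

In practice R is frequency-dependent and estimated from the immittance measurement, so this division is applied per frequency bin rather than at a single point.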
from #Audiology via ola Kala on Inoreader http://ift.tt/2CDowhH
via IFTTT
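The automated modified Hughson-Westlake procedure named in the Design follows a down-10/up-5 rule: drop 10 dB after each response, rise 5 dB after each miss, and take the threshold as the lowest level heard on at least two ascending presentations. A simplified sketch under those assumptions is below; `responds` is a stand-in for the listener, and real clinical implementations add familiarization, catch trials, and retest logic.

```python
def hughson_westlake(responds, start=40, floor=-10, ceiling=90):
    """Modified Hughson-Westlake threshold search (down-10/up-5 rule).

    `responds(level)` returns True if the listener reports hearing a pulsed
    tone at `level` dB HL. Threshold is the lowest level heard on at least
    two ascending presentations. Simplified sketch, not the study's exact
    implementation.
    """
    level = start
    # Initial descent: drop 10 dB per response until the tone is first missed.
    while level > floor and responds(level):
        level -= 10
    hits = {}
    while level <= ceiling:
        level += 5                                  # ascend in 5-dB steps
        while level <= ceiling and not responds(level):
            level += 5
        if level > ceiling:
            return None                             # no measurable threshold
        hits[level] = hits.get(level, 0) + 1        # heard on an ascending run
        if hits[level] >= 2:
            return level
        level = max(level - 10, floor - 5)          # drop 10 dB, re-ascend
```

With a deterministic listener who hears everything at or above 25 dB HL, the search converges on 25; a listener who never responds drives the search past the ceiling and yields `None`.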