Tuesday, February 6, 2018

Can Improved Cardiovascular Health Enhance Auditory Function?

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2EpR9UE
via IFTTT

Tinnitus, Hyperacusis, and the Autonomic Nervous System

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2E9SQ5m
via IFTTT

On the Heels of the World Health Assembly 2017 Resolution

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2El6FRL
via IFTTT

Auditory Processing Assessment Model for Older Patients

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2E6ZObg
via IFTTT

Improving the Quality of Life of Tinnitus Patients

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2BZOc87
via IFTTT

Going for Gold: Inspiration from Athletes with Hearing Loss

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2El6ypj
via IFTTT

Considerations for Culturally Sensitive Hearing Care

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2E6ZM38
via IFTTT

Updates on Unilateral Hearing Loss

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2EmIVwu
via IFTTT

Hearing Loss in Children with Down Syndrome

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2E6ZLMC
via IFTTT

Symptom: Ear Canal Mass

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2EmIMJs
via IFTTT

10 Facts (and a Question) About NIHL and Medical-legal Evaluation

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2E84n4V
via IFTTT

Manufacturers News

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/2EmIGl4
via IFTTT

Manual Versus Automated Narrative Analysis of Agrammatic Production Patterns: The Northwestern Narrative Language Analysis and Computerized Language Analysis

Purpose
The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals with aphasia (a) for reliability purposes to ascertain whether they yield similar results and (b) to evaluate CLAN for its ability to automatically identify language variables important for detailing agrammatic production patterns.
Method
The same set of Cinderella narrative samples from 8 participants with a clinical diagnosis of agrammatic aphasia and 10 cognitively healthy control participants were transcribed and coded using NNLA and CLAN. Both coding systems were utilized to quantify and characterize speech production patterns across several microsyntactic levels: utterance, sentence, lexical, morphological, and verb argument structure levels. Agreement between the 2 coding systems was computed for variables coded by both.
Results
Comparison of the 2 systems revealed high agreement for most, but not all, lexical-level and morphological-level variables. However, NNLA elucidated utterance-level, sentence-level, and verb argument structure–level impairments, important for assessment and treatment of agrammatism, which are not automatically coded by CLAN.
Conclusions
CLAN automatically and reliably codes most lexical and morphological variables but does not automatically quantify variables important for detailing production deficits in agrammatic aphasia, although conventions for manually coding some of these variables in Codes for the Human Analysis of Transcripts are possible. Suggestions for combining automated programs and manual coding to capture these variables or revising CLAN to automate coding of these variables are discussed.
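
The abstract states that agreement between the two coding systems was computed for variables coded by both, but does not name the statistic used. As a hedged illustration only, here is a minimal sketch of how inter-system agreement could be quantified with percent agreement and Cohen's kappa on hypothetical binary codes (the data and variable names below are invented, not from the study):

```python
# Hypothetical per-utterance codes from the two systems (NNLA vs. CLAN) for one
# lexical-level variable; 1 = feature coded as present, 0 = absent. Illustrative only.
nnla = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
clan = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

n = len(nnla)
observed = sum(a == b for a, b in zip(nnla, clan)) / n  # raw percent agreement

# Cohen's kappa corrects the raw agreement for agreement expected by chance.
p_both_yes = (sum(nnla) / n) * (sum(clan) / n)
p_both_no = (1 - sum(nnla) / n) * (1 - sum(clan) / n)
expected = p_both_yes + p_both_no
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```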

from #Audiology via ola Kala on Inoreader http://ift.tt/2EqlBhs
via IFTTT

Utterance Duration as It Relates to Communicative Variables in Infant Vocal Development

Purpose
We aimed to provide novel information on utterance duration as it relates to vocal type, facial affect, gaze direction, and age in the prelinguistic/early linguistic infant.
Method
Infant utterances were analyzed from longitudinal recordings of 15 infants at 8, 10, 12, 14, and 16 months of age. Utterance durations were measured and coded for vocal type (i.e., squeal, growl, raspberry, vowel, cry, laugh), facial affect (i.e., positive, negative, neutral), and gaze direction (i.e., to person, to mirror, or not directed).
Results
Of the 18,236 utterances analyzed, durations were typically shortest at 14 months of age and longest at 16 months of age. Statistically significant changes were observed in utterance durations across age for all variables of interest.
Conclusion
Despite variation in duration of infant utterances, developmental patterns were observed. For these infants, utterance durations appear to become more consolidated later in development, after the 1st year of life. Indeed, 12 months is often noted as the typical age of onset for 1st words and might possibly be a point in time when utterance durations begin to show patterns across communicative variables.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E7Z05L
via IFTTT

Cognitive Profiles of Finnish Preschool Children With Expressive and Receptive Language Impairment

Purpose
The aim of this study was to compare the verbal and nonverbal cognitive profiles of children with specific language impairment (SLI) with problems predominantly in expressive (SLI-E) or receptive (SLI-R) language skills. These diagnostic subgroups have not been compared before in psychological studies.
Method
Participants were preschool-age Finnish-speaking children with SLI diagnosed by a multidisciplinary team. Cognitive profile differences between the diagnostic subgroups and the relationship between verbal and nonverbal reasoning skills were evaluated.
Results
Performance was worse for the SLI-R subgroup than for the SLI-E subgroup not only in verbal reasoning and short-term memory but also in nonverbal reasoning, and several nonverbal subtests correlated significantly with the composite verbal index. However, weaknesses and strengths in the cognitive profiles of the subgroups were parallel.
Conclusions
Poor verbal comprehension and reasoning skills seem to be associated with lower nonverbal performance in children with SLI. Performance index (Performance Intelligence Quotient) may not always represent the intact nonverbal capacity assumed in SLI diagnostics, and a broader assessment is recommended when a child fails any of the compulsory Performance Intelligence Quotient subtests. Differences between the SLI subgroups appear quantitative rather than qualitative, in line with the new Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM V) classification (American Psychiatric Association, 2013).

from #Audiology via ola Kala on Inoreader http://ift.tt/2Em2uVQ
via IFTTT

Amazon and Health Care: Next Moves

Amazon announced a major health-care partnership deal with Berkshire Hathaway and JP Morgan Chase on January 30, 2018. It’s no secret that Amazon CEO Jeff Bezos has been thinking about health care since the 1990s, when he took a very hands-on role at Drugstore.com. Apparently, it is still top of mind, and he has enlisted Warren Buffet and Jamie Dimon to focus their attention on employer-sponsored health care. 



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2GTvse1
via IFTTT

Elderly with Heart Failure Have Higher Risk of Hearing Loss, Says Research

A new study shows evidence of a correlation between heart failure and hearing loss among older adults in the United States. Researchers from Weill Cornell Medical College in New York City used data from the 2005-2006 and 2009-2010 National Health and Nutrition Examination Survey to examine the prevalence and correlates of hearing loss among the elderly with and without heart conditions.

Study authors Madeline R. Sterling, MD, MPH, Frank R. Lin, MD, PhD, Deanna P. Jannat-Khah, DrPH, MSPH, Adele M. Goman, PhD, Sandra E. Echeverria, PhD, and Monika M. Safford, in a research letter, wrote that "Hearing loss is common among older adults in the United States and is associated with coronary heart disease and its risk factors. Yet, the prevalence of hearing loss among adults with heart failure (HF) has not been well described."

The study, titled "Hearing Loss Among Older Adults with Heart Failure in the United States," revealed that participants with heart failure were older, had more existing cardiovascular conditions, and were more likely to have hearing loss than participants without heart failure. Looking at survey data from adults aged 70 and older, the researchers found that hearing loss was more common among those with heart failure (74.4 percent) than among those without the comorbidity (63.3 percent). The results also showed that participants with heart failure were more likely to have greater degrees of hearing loss.

 "Although hearing loss was more common among adults with HF compared with those without it, HF was not independently associated with hearing loss after accounting for demographic and clinical characteristics," the research authors explained.

The researchers said that further studies on heart failure-hearing loss link among the elder population in the United States may provide more information. "Future studies might examine potential correlates of hearing loss that we were unable to study, including ejection fraction and HF-specific medications like furosemide, which has ototoxic properties."
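
The letter's key caveat is that heart failure was not independently associated with hearing loss once demographic and clinical characteristics were accounted for, which implies a covariate-adjusted model. Below is a rough sketch of how such an adjusted association is commonly estimated with logistic regression; the file name, column names, and covariate list are assumptions for illustration, not the authors' actual specification (a real NHANES analysis would also apply survey weights):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per NHANES participant aged 70 or older.
df = pd.read_csv("nhanes_70plus.csv")  # columns: hearing_loss, heart_failure, age, sex, diabetes, smoking

# Unadjusted (crude) association between heart failure and hearing loss
crude = smf.logit("hearing_loss ~ heart_failure", data=df).fit()

# Association adjusted for demographic and clinical characteristics
adjusted = smf.logit(
    "hearing_loss ~ heart_failure + age + C(sex) + C(diabetes) + C(smoking)",
    data=df,
).fit()

# An attenuated heart_failure coefficient in the adjusted model mirrors the reported finding.
print(crude.summary())
print(adjusted.summary())
```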

 

Published: 2/5/2018 8:27:00 AM


from #Audiology via ola Kala on Inoreader http://ift.tt/2C1jw6c
via IFTTT

Cervical Vestibular Evoked Myogenic Potential in Hypoglossal Nerve Schwannoma: A Case Report.

Cervical Vestibular Evoked Myogenic Potential in Hypoglossal Nerve Schwannoma: A Case Report.

J Am Acad Audiol. 2018 Feb;29(2):187-191

Authors: Rajasekaran AK, Savardekar AR, Shivashankar NR

Abstract
BACKGROUND: Schwannoma of the hypoglossal nerve is rare. This case report documents an atypical abnormality of the cervical vestibular evoked myogenic potential (cVEMP) in a patient with schwannoma of the hypoglossal nerve. The observed abnormality was attributed to the proximity of the hypoglossal nerve to the spinal accessory nerve in the medullary cistern and base of the skull.
PURPOSE: To report cVEMP abnormality in a patient with hypoglossal nerve schwannoma and provide an anatomical correlation for this abnormality.
RESEARCH DESIGN: Case report.
STUDY SAMPLE: A 44-yr-old woman.
DATA COLLECTION: Pure-tone and speech audiometry, tympanometry, acoustic stapedial reflex, auditory brainstem response, and cVEMP testing were performed.
RESULTS: The audiological test results were normal except for the absence of cVEMP on the lesion side (right).
CONCLUSIONS: A cVEMP abnormality indicating a compromised spinal accessory nerve was observed in a patient with hypoglossal nerve schwannoma. This case report highlights the importance of recording cVEMP in relevant neurological conditions and provides clinical proof for the involvement of the spinal accessory nerve in the vestibulocollic reflex pathway.

PMID: 29401065 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nLUqDt
via IFTTT

Higher Asymmetry Ratio and Refixation Saccades in Individuals with Motion Sickness.

Higher Asymmetry Ratio and Refixation Saccades in Individuals with Motion Sickness.

J Am Acad Audiol. 2018 Feb;29(2):175-186

Authors: Neupane AK, Gururaj K, Sinha SK

Abstract
BACKGROUND: Motion sickness is a complex autonomic phenomenon caused by the intersensory conflict among the balancing systems, resulting in a mismatch of signals between static physical conditions of the susceptible individual exposed to dynamic environment.
PURPOSE: The present study was done to assess the sacculocollic reflex pathway and six semicircular canals in individuals susceptible to motion sickness.
RESEARCH DESIGN: Standard group comparison was used.
STUDY SAMPLE: A total of 60 participants with an age range of 17-25 yr were included, where group I comprised 30 participants with motion sickness and group II comprised 30 participants without motion sickness. The Motion Sickness Susceptibility Questionnaire-Short was administered to classify the participants into groups with or without motion sickness.
DATA COLLECTION AND ANALYSIS: The cervical vestibular-evoked myogenic potential (cVEMP) test and video head impulse test (vHIT) were administered to all participants. The Shapiro-Wilk test revealed normal distribution of the data (p > 0.05). Hence a parametric independent sample t test was done to check significant difference in cVEMP and vHIT parameters between the two groups.
RESULTS: The present study revealed no significant difference for cVEMP latencies and amplitude in individuals with motion sickness. However, significantly higher cVEMP asymmetry ratio was observed in individuals with motion sickness. Though the vestibulo-ocular reflex (VOR) gain values showed no significant difference between the two groups except for the right anterior left posterior plane, the asymmetry in VOR gain values revealed significant difference between the groups, suggesting asymmetry as a better parameter than absolute VOR gain values. Also, the presence of refixation saccades in 100% of the individuals with motion sickness accorded with various studies reported earlier with vestibular-related pathologies.
CONCLUSIONS: Presence of higher asymmetry ratio in cVEMP and vHIT test results plus refixation saccades to stabilize the gaze in vHIT can suggest some amount of vestibular anomalies in individuals with motion sickness.
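
The headline finding is a higher cVEMP asymmetry ratio in the motion-sickness group. The abstract does not spell out the computation, but the interaural amplitude asymmetry formula commonly used for cVEMP P1-N1 amplitudes is sketched below; whether the study used exactly this formula, and the example amplitudes, are assumptions:

```python
def cvemp_asymmetry_ratio(left_amp, right_amp):
    """Interaural amplitude asymmetry (%) for cVEMP P1-N1 amplitudes.

    Larger values mean a bigger difference between ears; many labs flag values
    above roughly 35-40% as abnormal, although cutoffs vary by protocol.
    """
    return 100.0 * abs(left_amp - right_amp) / (left_amp + right_amp)

# Hypothetical P1-N1 amplitudes in microvolts for one participant
print(cvemp_asymmetry_ratio(left_amp=28.0, right_amp=45.0))  # ~23.3%
```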

PMID: 29401064 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E4R3Cl
via IFTTT

Test-Retest Reliability of Dual-Recorded Brainstem versus Cortical Auditory-Evoked Potentials to Speech.

Test-Retest Reliability of Dual-Recorded Brainstem versus Cortical Auditory-Evoked Potentials to Speech.

J Am Acad Audiol. 2018 Feb;29(2):164-174

Authors: Bidelman GM, Pousson M, Dugas C, Fehrenbach A

Abstract
BACKGROUND: Auditory-evoked potentials have proven useful in the objective evaluation of sound encoding at different stages of the auditory pathway (brainstem and cortex). Yet, their utility for use in clinical assessment and empirical research relies critically on the precision and test-retest repeatability of the measure.
PURPOSE: To determine how subcortical/cortical classes of auditory neural responses directly compare in terms of their internal consistency and test-retest reliability within and between listeners.
RESEARCH DESIGN: A descriptive cohort study describing the dispersion of electrophysiological measures.
STUDY SAMPLE: Eight young, normal-hearing female listeners.
DATA COLLECTION AND ANALYSIS: We recorded auditory brainstem responses (ABRs), brainstem frequency-following responses (FFRs), and cortical (P1-N1-P2) auditory-evoked potentials elicited by speech sounds in the same set of listeners. We reassessed responses within each of four different test sessions over a period of 1 mo, allowing us to detect possible changes in latency/amplitude characteristics with finer detail than in previous studies.
RESULTS: Our findings show that brainstem and cortical amplitude/latency measures are remarkably stable; with the exception of slight prolongation of the P1 wave, we found no significant variation in any response measure. Intraclass correlation analysis revealed that the speech-evoked FFR amplitude and latency measures achieved superior repeatability (intraclass correlation coefficient >0.85) among the more widely used obligatory brainstem (ABR) and cortical (P1-N1-P2) auditory-evoked potentials. Contrasting these intersubject effects, intrasubject variability (i.e., within-subject coefficient of variation) revealed that while latencies were more stable than amplitudes, brainstem and cortical responses did not differ in their variability at the single subject level.
CONCLUSIONS: We conclude that (1) the variability of auditory neural responses increases with ascending level along the auditory neuroaxis (cortex > brainstem) between subjects but remains highly stable within subjects and (2) speech-FFRs might provide a more stable measure of auditory function than other conventional responses (e.g., click-ABR), given their lower inter- and intrasubject variability.
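
The reliability analysis rests on intraclass correlations across the four sessions and on the within-subject coefficient of variation. A small sketch of both computations on made-up latency data follows; the exact ICC form used by the authors is not stated here, so the one-way ICC(1,1) below is an assumption:

```python
import numpy as np

# Hypothetical FFR latencies (ms): rows = subjects, columns = the 4 test sessions.
x = np.array([
    [6.8, 6.9, 6.8, 7.0],
    [7.1, 7.0, 7.2, 7.1],
    [6.5, 6.6, 6.5, 6.6],
    [7.4, 7.3, 7.5, 7.4],
])

n, k = x.shape
grand = x.mean()
ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)              # between-subject mean square
ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject mean square

icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Intrasubject variability: mean within-subject coefficient of variation
cv_within = (x.std(axis=1, ddof=1) / x.mean(axis=1)).mean()

print(f"ICC(1,1) = {icc_1_1:.2f}, within-subject CV = {cv_within:.3f}")
```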

PMID: 29401063 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2BGp3DJ
via IFTTT

The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

J Am Acad Audiol. 2018 Feb;29(2):151-163

Authors: Cameron S, Chong-White N, Mealings K, Beechey T, Dillon H, Young T

Abstract
BACKGROUND: Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units.
PURPOSE: The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, the participant identified whether one, two, or three pseudo-syllables were heard.
RESEARCH DESIGN: Development of the ParSE stimuli and test protocols, and collection of normative and test-retest reliability data.
STUDY SAMPLE: Eleven adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 72 females.
DATA COLLECTION AND ANALYSIS: Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was related to the modulation depth at which syllables can be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflected a greater level of uncertainty. Age effects were determined based on raw scores. z Scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance.
RESULTS: Across participants, the performance criterion (UBUR) was met with a median modulation depth of 42%. The effect of age on the UBUR was significant (p < 0.00001). The UBUR ranged from 50% modulation depth for 6-yr-olds to 25% for adults. Children aged 6-10 had significantly higher uncertainty region boundaries than adults. A skewed distribution toward negative performance occurred (p = 0.00007). There was no significant difference in performance on the ParSE between males and females (p = 0.60). Test-retest z scores were strongly correlated (r = 0.68, p < 0.0000001).
CONCLUSIONS: The ParSE normative data show that the ability to identify syllable boundaries based on changes in amplitude modulation improves with age, and that some children in the general population have performance much worse than their age peers. The test is suitable for use in planned studies in a clinical population.
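
Performance on the ParSE is defined by the modulation depth at which syllables are detected with 88% accuracy (the UBUR), obtained from a fitted psychometric function. Here is a minimal sketch of that idea using a logistic function rising from chance (1/3 for a three-alternative task); the data and the exact functional form are assumptions, not the ParSE software's actual fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative proportion-correct scores at each modulation depth along the 11-step continuum
depth = np.linspace(0, 100, 11)
p_correct = np.array([0.33, 0.34, 0.40, 0.47, 0.58, 0.70, 0.82, 0.90, 0.95, 0.97, 0.98])

def psychometric(x, midpoint, slope, chance=1/3):
    """Logistic psychometric function rising from chance performance to 1."""
    return chance + (1 - chance) / (1 + np.exp(-slope * (x - midpoint)))

(midpoint, slope), _ = curve_fit(psychometric, depth, p_correct, p0=[50.0, 0.1])

# Modulation depth at which accuracy reaches 88% (analogue of the UBUR criterion)
target = 0.88
ubur = midpoint - np.log((1 - 1/3) / (target - 1/3) - 1) / slope
print(f"Estimated 88%-correct modulation depth: {ubur:.1f}%")
```

A shallower fitted slope spreads the function over a wider range of depths, which is how greater uncertainty shows up in this framework.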

PMID: 29401062 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nNn3QB
via IFTTT

The Phoneme Identification Test for Assessment of Spectral and Temporal Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

The Phoneme Identification Test for Assessment of Spectral and Temporal Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

J Am Acad Audiol. 2018 Feb;29(2):135-150

Authors: Cameron S, Chong-White N, Mealings K, Beechey T, Dillon H, Young T

Abstract
BACKGROUND: Previous research suggests that a proportion of children experiencing reading and listening difficulties may have an underlying primary deficit in the way that the central auditory nervous system analyses the perceptually important, rapidly varying, formant frequency components of speech.
PURPOSE: The Phoneme Identification Test (PIT) was developed to investigate the ability of children to use spectro-temporal cues to perceptually categorize speech sounds based on their rapidly changing formant frequencies. The PIT uses an adaptive two-alternative forced-choice procedure whereby the participant identifies a synthesized consonant-vowel (CV) (/ba/ or /da/) syllable. CV syllables differed only in the second formant (F2) frequency along an 11-step continuum (between 0% and 100%-representing an ideal /ba/ and /da/, respectively). The CV syllables were presented in either quiet (PIT Q) or noise at a 0 dB signal-to-noise ratio (PIT N).
RESEARCH DESIGN: Development of the PIT stimuli and test protocols, and collection of normative and test-retest reliability data.
STUDY SAMPLE: Twelve adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 5 mo) and 137 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 76 females.
DATA COLLECTION AND ANALYSIS: Data were collected using a touchscreen computer. Psychometric functions were automatically fit to individual data by the PIT software. Performance was determined by the width of the continuum for which responses were neither clearly /ba/ nor /da/ (referred to as the uncertainty region [UR]). A shallower psychometric function slope reflected greater uncertainty. Age effects were determined based on raw scores. Z scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance.
RESULTS: Across participants, the median value of the F2 range that resulted in uncertain responses was 33% in quiet and 40% in noise. There was a significant effect of age on the width of this UR (p < 0.00001) in both quiet and noise, with performance becoming adult like by age 9 on the PIT Q and age 10 on the PIT N. A skewed distribution toward negative performance occurred in both quiet (p = 0.01) and noise (p = 0.006). Median UR scores were significantly wider in noise than in quiet (T = 2041, p < 0.0000001). Performance (z scores) across the two tests was significantly correlated (r = 0.36, p = 0.000009). Test-retest z scores were significantly correlated in both quiet and noise (r = 0.4 and 0.37, respectively, p < 0.0001).
CONCLUSIONS: The PIT normative data show that the ability to identify phonemes based on changes in formant transitions improves with age, and that some children in the general population have performance much worse than their age peers. In children, uncertainty increases when the stimuli are presented in noise. The test is suitable for use in planned studies in a clinical population.
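
The PIT stimuli differ only in second-formant (F2) frequency along an 11-step continuum between an ideal /ba/ and /da/. A tiny sketch of how such continuum steps can be generated by linear interpolation follows; the endpoint F2 values are illustrative assumptions, since the actual synthesis parameters are not given in this summary:

```python
import numpy as np

# Assumed F2 onset frequencies (Hz) for the ideal /ba/ (0%) and /da/ (100%) endpoints
f2_ba, f2_da = 900.0, 1700.0

steps = np.linspace(0.0, 1.0, 11)           # 11 equally spaced continuum steps
f2_onsets = f2_ba + steps * (f2_da - f2_ba)

for pct, f2 in zip(steps * 100, f2_onsets):
    print(f"{pct:5.0f}%  F2 onset = {f2:6.1f} Hz")
```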

PMID: 29401061 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E4Envd
via IFTTT

Exponential Modeling of Frequency-Following Responses in American Neonates and Adults.

Exponential Modeling of Frequency-Following Responses in American Neonates and Adults.

J Am Acad Audiol. 2018 Feb;29(2):125-134

Authors: Jeng FC, Nance B, Montgomery-Reagan K, Lin CD

Abstract
BACKGROUND: The scalp-recorded frequency-following response (FFR) has been widely accepted in assessing the brain's processing of speech stimuli for people who speak tonal and nontonal languages. Characteristics of scalp-recorded FFRs with increasing number of sweeps have been delineated through the use of an exponential curve-fitting model in Chinese adults; however, characteristics of speech processing for people who speak a nontonal language remain unclear.
PURPOSE: This study had two specific aims. The first was to examine the characteristics of speech processing in neonates and adults who speak a nontonal language, to evaluate the goodness of fit of an exponential model on neonatal and adult FFRs, and to determine the differences, if any, between the two groups of participants. The second aim was to assess effective recording parameters for American neonates and adults.
RESEARCH DESIGN: This investigation employed a prospective between-subject study design.
STUDY SAMPLE: A total of 12 American neonates (1-3 days old) and 12 American adults (24.1 ± 2.5 yr old) were recruited. Each neonate passed an automated hearing screening at birth and all adult participants had normal hearing and were native English speakers.
DATA COLLECTION AND ANALYSIS: The English vowel /i/ with a rising pitch contour (117-166 Hz) was used to elicit the FFR. A total of 8,000 accepted sweeps were recorded from each participant. Three objective indices (Frequency Error, Tracking Accuracy, and Pitch Strength) were computed to estimate the frequency-tracking acuity and neural phase-locking magnitude when progressively more sweeps were included in the averaged waveform. For each objective index, the FFR trends were fit to an exponential curve-fitting model that included estimates of asymptotic amplitude, noise amplitude, and a time constant.
RESULTS: Significant differences were observed between groups for Frequency Error, Tracking Accuracy, and Pitch Strength of the FFR trends. The adult participants had significantly smaller Frequency Error (p < 0.001), better Tracking Accuracy (p = 0.001), and larger Pitch Strength (p = 0.003) values than the neonate participants. The adult participants also demonstrated a faster rate of improvement (i.e., a smaller time constant) in all three objective indices compared to the neonate participants. The smaller time constants observed in adults indicate that a larger number of sweeps will be needed to adequately assess the FFR for neonates. Furthermore, the exponential curve-fitting model provided a good fit to the FFR trends with increasing number of sweeps for American neonates (mean r2 = 0.89) and adults (mean r2 = 0.96).
CONCLUSIONS: Significant differences were noted between the neonatal and adult participants for Frequency Error, Tracking Accuracy, and Pitch Strength. These differences have important clinical implications in determining when to stop a recording and the number of sweeps needed to adequately assess the frequency-encoding acuity and neural phase-locking magnitude in neonates and adults. These findings lay an important foundation for establishing a normative database for American neonates and adults, and may prove to be useful in the development of diagnostic and therapeutic paradigms for neonates and adults who speak a nontonal language.
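
The modeling hinges on fitting each FFR trend with an exponential function described by an asymptotic amplitude, a noise amplitude, and a time constant. One plausible parameterization consistent with that description is sketched below; the exact equation and the example values are assumptions rather than the study's published model:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_trend(n_sweeps, asymptote, noise, tau):
    """Index value approaching its asymptote as more sweeps are averaged.

    asymptote: value the index converges to with unlimited averaging
    noise: size of the residual-noise term that shrinks with averaging
    tau: time constant (in sweeps) governing how quickly the index stabilizes
    """
    return asymptote - noise * np.exp(-n_sweeps / tau)

# Illustrative Pitch Strength values at increasing sweep counts (not study data)
sweeps = np.array([500, 1000, 2000, 3000, 4000, 6000, 8000], dtype=float)
pitch_strength = np.array([0.12, 0.18, 0.25, 0.28, 0.30, 0.32, 0.33])

(asym, noise, tau), _ = curve_fit(exp_trend, sweeps, pitch_strength, p0=[0.35, 0.3, 1500.0])
print(f"asymptote = {asym:.2f}, noise = {noise:.2f}, tau = {tau:.0f} sweeps")
```

A smaller fitted time constant means the index stabilizes after fewer sweeps, which matches the faster improvement reported for the adult group.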

PMID: 29401060 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nNa9Ch
via IFTTT

Survey of Current Practice in the Fitting and Fine-Tuning of Common Signal-Processing Features in Hearing Aids for Adults.

Survey of Current Practice in the Fitting and Fine-Tuning of Common Signal-Processing Features in Hearing Aids for Adults.

J Am Acad Audiol. 2018 Feb;29(2):118-124

Authors: Anderson MC, Arehart KH, Souza PE

Abstract
BACKGROUND: Current guidelines for adult hearing aid fittings recommend the use of a prescriptive fitting rationale with real-ear verification that considers the audiogram for the determination of frequency-specific gain and ratios for wide dynamic range compression. However, the guidelines lack recommendations for how other common signal-processing features (e.g., noise reduction, frequency lowering, directional microphones) should be considered during the provision of hearing aid fittings and fine-tunings for adult patients.
PURPOSE: The purpose of this survey was to identify how audiologists make clinical decisions regarding common signal-processing features for hearing aid provision in adults.
RESEARCH DESIGN: An online survey was sent to audiologists across the United States. The 22 survey questions addressed four primary topics including demographics of the responding audiologists, factors affecting selection of hearing aid devices, the approaches used in the fitting of signal-processing features, and the strategies used in the fine-tuning of these features.
STUDY SAMPLE: A total of 251 audiologists who provide hearing aid fittings to adults completed the electronically distributed survey. The respondents worked in a variety of settings including private practice, physician offices, university clinics, and hospitals/medical centers.
DATA COLLECTION AND ANALYSIS: Data analysis was based on a qualitative analysis of the question responses. The survey results for each of the four topic areas (demographics, device selection, hearing aid fitting, and hearing aid fine-tuning) are summarized descriptively.
RESULTS: Survey responses indicate that audiologists vary in the procedures they use in fitting and fine-tuning based on the specific feature, such that the approaches used for the fitting of frequency-specific gain differ from other types of features (i.e., compression time constants, frequency lowering parameters, noise reduction strength, directional microphones, feedback management). Audiologists commonly rely on prescriptive fitting formulas and probe microphone measures for the fitting of frequency-specific gain and rely on manufacturers' default settings and recommendations for both the initial fitting and the fine-tuning of signal-processing features other than frequency-specific gain.
CONCLUSIONS: The survey results are consistent with a lack of published protocols and guidelines for fitting and adjusting signal-processing features beyond frequency-specific gain. To streamline current practice, a transparent evidence-based tool that enables clinicians to prescribe the setting of other features from individual patient characteristics would be desirable.

PMID: 29401059 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E6pqci
via IFTTT

Predictive Accuracy of Sweep Frequency Impedance Technology in Identifying Conductive Conditions in Newborns.

Predictive Accuracy of Sweep Frequency Impedance Technology in Identifying Conductive Conditions in Newborns.

J Am Acad Audiol. 2018 Feb;29(2):106-117

Authors: Aithal V, Kei J, Driscoll C, Murakoshi M, Wada H

Abstract
BACKGROUND: Diagnosing conductive conditions in newborns is challenging for both audiologists and otolaryngologists. Although high-frequency tympanometry (HFT), acoustic stapedial reflex tests, and wideband absorbance measures are useful diagnostic tools, there is performance measure variability in their detection of middle ear conditions. Additional diagnostic sensitivity and specificity measures gained through new technology such as sweep frequency impedance (SFI) measures may assist in the diagnosis of middle ear dysfunction in newborns.
PURPOSE: The purpose of this study was to determine the test performance of SFI to predict the status of the outer and middle ear in newborns against commonly used reference standards.
RESEARCH DESIGN: Automated auditory brainstem response (AABR), HFT (1000 Hz), transient evoked otoacoustic emission (TEOAE), distortion product otoacoustic emission (DPOAE), and SFI tests were administered to the study sample.
STUDY SAMPLE: A total of 188 neonates (98 males and 90 females) with a mean gestational age of 39.4 weeks were included in the sample. Mean age at the time of testing was 44.4 hr.
DATA COLLECTION AND ANALYSIS: Diagnostic accuracy of SFI was assessed in terms of its ability to identify conductive conditions in neonates when compared with nine different reference standards (including four single tests [AABR, HFT, TEOAE, and DPOAE] and five test batteries [HFT + DPOAE, HFT + TEOAE, DPOAE + TEOAE, DPOAE + AABR, and TEOAE + AABR]), using receiver operating characteristic (ROC) analysis and traditional test performance measures such as sensitivity and specificity.
RESULTS: The test performance of SFI against the test battery reference standard of HFT + DPOAE and single reference standard of HFT was high with an area under the ROC curve (AROC) of 0.87 and 0.82, respectively. Although the HFT + DPOAE test battery reference standard performed better than the HFT reference standard in predicting middle ear conductive conditions in neonates, the difference in AROC was not significant. Further analysis revealed that the highest sensitivity and specificity for SFI (86% and 88%, respectively) was obtained when compared with the reference standard of HFT + DPOAE. Among the four single reference standards, SFI had the highest sensitivity and specificity (76% and 88%, respectively) when compared against the HFT reference standard.
CONCLUSIONS: The high test performance of SFI against the HFT and HFT + DPOAE reference standards indicates that the SFI measure has appropriate diagnostic accuracy in detection of conductive conditions in newborns. Hence, the SFI test could be used as adjunct tool to identify conductive conditions in universal newborn hearing screening programs, and can also be used in diagnostic follow-up assessments.
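
Test performance is summarized by sensitivity, specificity, and the area under the ROC curve (AROC) against each reference standard. A compact sketch of those computations on invented pass/refer data follows; the scores, threshold, and scoring direction are assumptions for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: reference-standard outcome per ear (1 = conductive condition
# present per HFT + DPOAE, 0 = absent) and a continuous SFI score where higher
# values indicate a greater likelihood of a conductive condition.
reference = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
sfi_score = np.array([0.9, 0.7, 0.8, 0.2, 0.4, 0.1, 0.45, 0.3, 0.5, 0.85])

aroc = roc_auc_score(reference, sfi_score)  # area under the ROC curve

# Sensitivity and specificity at one example decision threshold
threshold = 0.55
refer = sfi_score >= threshold
sensitivity = np.sum(refer & (reference == 1)) / np.sum(reference == 1)
specificity = np.sum(~refer & (reference == 0)) / np.sum(reference == 0)
print(f"AROC = {aroc:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```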

PMID: 29401058 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nPR6a6
via IFTTT

Effects of Early- and Late-Arriving Room Reflections on the Speech-Evoked Auditory Brainstem Response.

Effects of Early- and Late-Arriving Room Reflections on the Speech-Evoked Auditory Brainstem Response.

J Am Acad Audiol. 2018 Feb;29(2):95-105

Authors: Al Osman R, Giguère C, Dajani HR

Abstract
BACKGROUND: Room reverberation alters the acoustical properties of the speech signals reaching our ears, affecting speech understanding. Therefore, it is important to understand the consequences of reverberation on auditory processing. In perceptual studies, the direct sound and early reflections of reverberated speech have been found to constitute useful energy, whereas the late reflections constitute detrimental energy.
PURPOSE: This study investigated how various components (direct sound versus early reflections versus late reflections) of the reverberated speech are encoded in the auditory system using the speech-evoked auditory brainstem response (ABR).
RESEARCH DESIGN: Speech-evoked ABRs were recorded using reverberant stimuli created as a result of the convolution between an ongoing synthetic vowel /a/ and each of the following room impulse response (RIR) components: direct sound, early reflections, late reflections, and full reverberation. Four stimuli were produced: direct component, early component, late component, and full component.
STUDY SAMPLE: Twelve participants with normal hearing participated in this study.
DATA COLLECTION AND ANALYSIS: Waves V and A amplitudes and latencies as well as envelope-following response (EFR) and fine structure frequency-following response (FFR) amplitudes of the speech-evoked ABR were evaluated separately with one-way repeated measures analysis of variances to determine the effect of stimulus. Post hoc comparisons using Tukey's honestly significant difference test were performed to assess significant differences between pairs of stimulus conditions.
RESULTS: For waves V and A amplitudes, a significant difference or trend toward significance was found between direct and late components, between direct and full components, and between early and late components. For waves V and A latencies, significant differences were found between direct and late components, between direct and full components, between early and late components, and between early and full components. For the EFR and FFR amplitudes, a significant difference or trend toward significance was found between direct and late components, and between early and late components. Moreover, eight, three, and one participant reported the early, full, and late stimuli, respectively, to be the most perceptually similar to the direct stimulus.
CONCLUSIONS: The stimuli that are acoustically most similar (direct and early) result in electrophysiological responses that are not significantly different, whereas the stimuli that are acoustically most different (direct and late, early and late) result in responses that are significantly different across all response measures. These findings provide insights toward the understanding of the effects of the different components of the RIRs on auditory processing of speech.
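
Each stimulus is produced by convolving the synthetic /a/ vowel with one component of the room impulse response (RIR). A bare-bones sketch of that construction with NumPy is given below; the sampling rate, vowel synthesis, file name, and the 50 ms early/late boundary are assumptions, not details given in the abstract:

```python
import numpy as np

fs = 44100                       # assumed sampling rate (Hz)
t = np.arange(0, 0.75, 1 / fs)   # ~750 ms steady-state segment

# Crude stand-in for a synthetic /a/: a 100 Hz harmonic complex (illustrative only)
f0 = 100.0
vowel = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 11))

# Load a measured RIR and split it into direct sound, early, and late reflections.
rir = np.load("room_impulse_response.npy")  # hypothetical file
direct_ms, early_ms = 5, 50                 # assumed split points
direct = rir[: int(fs * direct_ms / 1000)]
early = rir[int(fs * direct_ms / 1000): int(fs * early_ms / 1000)]
late = rir[int(fs * early_ms / 1000):]

# Convolving the vowel with each component yields the four stimulus conditions.
stimuli = {
    "direct": np.convolve(vowel, direct),
    "early": np.convolve(vowel, early),
    "late": np.convolve(vowel, late),
    "full": np.convolve(vowel, rir),
}
```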

PMID: 29401057 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E4QUyN
via IFTTT

Cervical Vestibular Evoked Myogenic Potentials and Hypoglossal Nerve Schwannoma.

Cervical Vestibular Evoked Myogenic Potentials and Hypoglossal Nerve Schwannoma.

J Am Acad Audiol. 2018 Feb;29(2):94

Authors: McCaslin DL

PMID: 29401056 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nNa8OJ
via IFTTT

Cervical Vestibular Evoked Myogenic Potential in Hypoglossal Nerve Schwannoma: A Case Report.

Cervical Vestibular Evoked Myogenic Potential in Hypoglossal Nerve Schwannoma: A Case Report.

J Am Acad Audiol. 2018 Feb;29(2):187-191

Authors: Rajasekaran AK, Savardekar AR, Shivashankar NR

Abstract
BACKGROUND: Schwannoma of the hypoglossal nerve is rare. This case report documents an atypical abnormality of the cervical vestibular evoked myogenic potential (cVEMP) in a patient with schwannoma of the hypoglossal nerve. The observed abnormality was attributed to the proximity of the hypoglossal nerve to the spinal accessory nerve in the medullary cistern and base of the skull.
PURPOSE: To report cVEMP abnormality in a patient with hypoglossal nerve schwannoma and provide an anatomical correlation for this abnormality.
RESEARCH DESIGN: Case report.
STUDY SAMPLE: A 44-yr-old woman.
DATA COLLECTION: Pure-tone and speech audiometry, tympanometry, acoustic stapedial reflex, auditory brainstem response, and cVEMP testing were performed.
RESULTS: The audiological test results were normal except for the absence of cVEMP on the lesion side (right).
CONCLUSIONS: A cVEMP abnormality indicating a compromised spinal accessory nerve was observed in a patient with hypoglossal nerve schwannoma. This case report highlights the importance of recording cVEMP in relevant neurological conditions and provides clinical evidence of the involvement of the spinal accessory nerve in the vestibulocollic reflex pathway.

PMID: 29401065 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nLUqDt
via IFTTT

Higher Asymmetry Ratio and Refixation Saccades in Individuals with Motion Sickness.

Higher Asymmetry Ratio and Refixation Saccades in Individuals with Motion Sickness.

J Am Acad Audiol. 2018 Feb;29(2):175-186

Authors: Neupane AK, Gururaj K, Sinha SK

Abstract
BACKGROUND: Motion sickness is a complex autonomic phenomenon caused by intersensory conflict among the balance systems, in which signals reflecting the susceptible individual's static physical state are mismatched with a dynamic environment.
PURPOSE: The present study was done to assess the sacculocollic reflex pathway and six semicircular canals in individuals susceptible to motion sickness.
RESEARCH DESIGN: Standard group comparison was used.
STUDY SAMPLE: A total of 60 participants aged 17-25 yr were included: group I comprised 30 participants with motion sickness and group II comprised 30 participants without motion sickness. The Motion Sickness Susceptibility Questionnaire-Short was administered to classify the participants into the two groups.
DATA COLLECTION AND ANALYSIS: The cervical vestibular-evoked myogenic potential (cVEMP) test and video head impulse test (vHIT) were administered to all participants. The Shapiro-Wilk test indicated that the data were normally distributed (p > 0.05), so parametric independent-samples t tests were used to test for differences in cVEMP and vHIT parameters between the two groups.
RESULTS: There was no significant group difference in cVEMP latencies or amplitudes, but the cVEMP asymmetry ratio was significantly higher in individuals with motion sickness. Although vestibulo-ocular reflex (VOR) gain did not differ significantly between the groups except in the right anterior-left posterior plane, VOR gain asymmetry did differ significantly, suggesting that asymmetry is a more sensitive parameter than absolute VOR gain. In addition, refixation saccades were present in 100% of the individuals with motion sickness, consistent with earlier reports in vestibular pathologies.
CONCLUSIONS: The higher asymmetry ratios in cVEMP and vHIT results, together with refixation saccades to stabilize gaze during vHIT, suggest a degree of vestibular anomaly in individuals with motion sickness.
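
As a rough illustration of the group comparison described above, the sketch below computes a conventional cVEMP amplitude asymmetry ratio and runs the Shapiro-Wilk and independent-samples t tests with SciPy. The data and variable names are invented, and the exact asymmetry formula is an assumption; the abstract does not state the formula the authors used.

# Toy example (not study data): asymmetry ratio plus the normality check and
# t test named in the abstract. The ratio is the conventional
# 100 * |L - R| / (L + R) form, assumed here for illustration.
import numpy as np
from scipy import stats

def asymmetry_ratio(left_amp, right_amp):
    """Conventional cVEMP amplitude asymmetry ratio in percent."""
    return 100.0 * abs(left_amp - right_amp) / (left_amp + right_amp)

# Hypothetical asymmetry ratios for the two groups.
motion_sickness = np.array([28.0, 31.5, 22.4, 35.1, 27.8])
controls = np.array([12.3, 15.0, 9.8, 18.2, 14.1])

# Check normality before using a parametric test, as in the abstract.
for name, data in [("motion sickness", motion_sickness), ("control", controls)]:
    w, p = stats.shapiro(data)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

t, p = stats.ttest_ind(motion_sickness, controls)
print(f"Independent-samples t test: t = {t:.2f}, p = {p:.4f}")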

PMID: 29401064 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E4R3Cl
via IFTTT

Test-Retest Reliability of Dual-Recorded Brainstem versus Cortical Auditory-Evoked Potentials to Speech.

Test-Retest Reliability of Dual-Recorded Brainstem versus Cortical Auditory-Evoked Potentials to Speech.

J Am Acad Audiol. 2018 Feb;29(2):164-174

Authors: Bidelman GM, Pousson M, Dugas C, Fehrenbach A

Abstract
BACKGROUND: Auditory-evoked potentials have proven useful in the objective evaluation of sound encoding at different stages of the auditory pathway (brainstem and cortex). Yet, their utility for use in clinical assessment and empirical research relies critically on the precision and test-retest repeatability of the measure.
PURPOSE: To determine how subcortical/cortical classes of auditory neural responses directly compare in terms of their internal consistency and test-retest reliability within and between listeners.
RESEARCH DESIGN: A descriptive cohort study describing the dispersion of electrophysiological measures.
STUDY SAMPLE: Eight young, normal-hearing female listeners.
DATA COLLECTION AND ANALYSIS: We recorded auditory brainstem responses (ABRs), brainstem frequency-following responses (FFRs), and cortical (P1-N1-P2) auditory-evoked potentials elicited by speech sounds in the same set of listeners. We reassessed responses within each of four different test sessions over a period of 1 mo, allowing us to detect possible changes in latency/amplitude characteristics with finer detail than in previous studies.
RESULTS: Our findings show that brainstem and cortical amplitude/latency measures are remarkably stable; with the exception of slight prolongation of the P1 wave, we found no significant variation in any response measure. Intraclass correlation analysis revealed that the speech-evoked FFR amplitude and latency measures achieved superior repeatability (intraclass correlation coefficient >0.85) compared with the more widely used obligatory brainstem (ABR) and cortical (P1-N1-P2) auditory-evoked potentials. In contrast to these intersubject effects, intrasubject variability (i.e., within-subject coefficient of variation) showed that, while latencies were more stable than amplitudes, brainstem and cortical responses did not differ in their variability at the single-subject level.
CONCLUSIONS: We conclude that (1) the variability of auditory neural responses increases with ascending level along the auditory neuroaxis (cortex > brainstem) between subjects but remains highly stable within subjects and (2) speech-FFRs might provide a more stable measure of auditory function than other conventional responses (e.g., click-ABR), given their lower inter- and intrasubject variability.
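
The two reliability metrics named in the abstract, the intraclass correlation coefficient and the within-subject coefficient of variation, can be sketched as follows on a subjects-by-sessions matrix. This is a toy example with simulated data and a one-way random-effects ICC; the abstract does not state which ICC formulation the authors applied.

# Minimal sketch, assuming a one-way random-effects ICC(1,1); not the authors' code.
import numpy as np

def icc_oneway(x):
    """ICC(1,1) from a subjects-by-sessions array."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def within_subject_cv(x):
    """Mean within-subject coefficient of variation (SD / mean per subject)."""
    return np.mean(x.std(axis=1, ddof=1) / x.mean(axis=1))

# Hypothetical amplitudes: 8 listeners x 4 sessions.
rng = np.random.default_rng(0)
amplitudes = 0.5 + 0.1 * rng.standard_normal((8, 4))
print(f"ICC = {icc_oneway(amplitudes):.2f}, "
      f"within-subject CV = {within_subject_cv(amplitudes):.2%}")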

PMID: 29401063 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2BGp3DJ
via IFTTT

The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

J Am Acad Audiol. 2018 Feb;29(2):151-163

Authors: Cameron S, Chong-White N, Mealings K, Beechey T, Dillon H, Young T

Abstract
BACKGROUND: Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units.
PURPOSE: The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, the participant identified whether one, two, or three pseudo-syllables were heard.
RESEARCH DESIGN: Development of the ParSE stimuli and test protocols, and collection of normative and test-retest reliability data.
STUDY SAMPLE: Eleven adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 72 females.
DATA COLLECTION AND ANALYSIS: Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was indexed by the modulation depth at which syllables could be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflected a greater level of uncertainty. Age effects were determined based on raw scores, and z scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance.
RESULTS: Across participants, the performance criterion (UBUR) was met with a median modulation depth of 42%. The effect of age on the UBUR was significant (p < 0.00001). The UBUR ranged from 50% modulation depth for 6-yr-olds to 25% for adults. Children aged 6-10 had significantly higher uncertainty region boundaries than adults. A skewed distribution toward negative performance occurred (p = 0.00007). There was no significant difference in performance on the ParSE between males and females (p = 0.60). Test-retest z scores were strongly correlated (r = 0.68, p < 0.0000001).
CONCLUSIONS: The ParSE normative data show that the ability to identify syllable boundaries based on changes in amplitude modulation improves with age, and that some children in the general population have performance much worse than their age peers. The test is suitable for use in planned studies in a clinical population.
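
A minimal sketch of the kind of psychometric-function analysis described above: fit a logistic function to percent-correct data across modulation depths and solve for the depth giving 88% accuracy. The ParSE software's actual fitting routine is not described in the abstract, so the function form, the chance-level handling, and the example data are assumptions.

# Illustrative PF fit, assuming a logistic rising from chance (1/3 for a
# three-alternative task) to 1.0; not the ParSE implementation.
import numpy as np
from scipy.optimize import curve_fit

def logistic(depth, midpoint, slope):
    chance = 1.0 / 3.0
    return chance + (1.0 - chance) / (1.0 + np.exp(-slope * (depth - midpoint)))

# Hypothetical proportion correct at each modulation depth (11-step continuum).
depths = np.linspace(0, 100, 11)
p_correct = np.array([.34, .35, .40, .48, .60, .74, .85, .92, .96, .98, .99])

(mid, slope), _ = curve_fit(logistic, depths, p_correct, p0=[50.0, 0.1])

# Invert the fitted function to get the depth at 88% correct (UBUR analogue).
chance = 1.0 / 3.0
target = 0.88
ubur = mid - np.log((1.0 - chance) / (target - chance) - 1.0) / slope
print(f"Depth at 88% correct is approximately {ubur:.1f}% modulation")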

PMID: 29401062 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nNn3QB
via IFTTT

The Phoneme Identification Test for Assessment of Spectral and Temporal Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

The Phoneme Identification Test for Assessment of Spectral and Temporal Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

J Am Acad Audiol. 2018 Feb;29(2):135-150

Authors: Cameron S, Chong-White N, Mealings K, Beechey T, Dillon H, Young T

Abstract
BACKGROUND: Previous research suggests that a proportion of children experiencing reading and listening difficulties may have an underlying primary deficit in the way that the central auditory nervous system analyses the perceptually important, rapidly varying, formant frequency components of speech.
PURPOSE: The Phoneme Identification Test (PIT) was developed to investigate the ability of children to use spectro-temporal cues to perceptually categorize speech sounds based on their rapidly changing formant frequencies. The PIT uses an adaptive two-alternative forced-choice procedure whereby the participant identifies a synthesized consonant-vowel (CV) syllable (/ba/ or /da/). CV syllables differed only in the second formant (F2) frequency along an 11-step continuum (between 0% and 100%, representing an ideal /ba/ and /da/, respectively). The CV syllables were presented in either quiet (PIT Q) or noise at a 0 dB signal-to-noise ratio (PIT N).
RESEARCH DESIGN: Development of the PIT stimuli and test protocols, and collection of normative and test-retest reliability data.
STUDY SAMPLE: Twelve adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 5 mo) and 137 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 76 females.
DATA COLLECTION AND ANALYSIS: Data were collected using a touchscreen computer. Psychometric functions were automatically fit to individual data by the PIT software. Performance was determined by the width of the continuum for which responses were neither clearly /ba/ nor /da/ (referred to as the uncertainty region [UR]). A shallower psychometric function slope reflected greater uncertainty. Age effects were determined based on raw scores. Z scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance.
RESULTS: Across participants, the median value of the F2 range that resulted in uncertain responses was 33% in quiet and 40% in noise. There was a significant effect of age on the width of this UR (p < 0.00001) in both quiet and noise, with performance becoming adult like by age 9 on the PIT Q and age 10 on the PIT N. A skewed distribution toward negative performance occurred in both quiet (p = 0.01) and noise (p = 0.006). Median UR scores were significantly wider in noise than in quiet (T = 2041, p < 0.0000001). Performance (z scores) across the two tests was significantly correlated (r = 0.36, p = 0.000009). Test-retest z scores were significantly correlated in both quiet and noise (r = 0.4 and 0.37, respectively, p < 0.0001).
CONCLUSIONS: The PIT normative data show that the ability to identify phonemes based on changes in formant transitions improves with age, and that some children in the general population have performance much worse than their age peers. In children, uncertainty increases when the stimuli are presented in noise. The test is suitable for use in planned studies in a clinical population.
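
In the same spirit, the identification-function analysis for the PIT can be sketched by fitting a logistic to the proportion of /da/ responses along the F2 continuum and measuring the width of the region where responses are neither clearly /ba/ nor /da/. The 10% and 90% criteria and the example data are assumptions; the abstract does not state the cutoffs used to define the uncertainty region (UR).

# Illustrative identification-function fit; cutoffs and data are assumed.
import numpy as np
from scipy.optimize import curve_fit

def identification(f2_step, midpoint, slope):
    """Probability of a /da/ response as a function of F2 continuum step (0-100%)."""
    return 1.0 / (1.0 + np.exp(-slope * (f2_step - midpoint)))

steps = np.linspace(0, 100, 11)
p_da = np.array([.02, .03, .05, .12, .30, .55, .78, .90, .96, .98, .99])  # example data

(mid, slope), _ = curve_fit(identification, steps, p_da, p0=[50.0, 0.1])

# Continuum positions where p(/da/) crosses the assumed 10% and 90% criteria.
lo = mid - np.log(9.0) / slope   # p = 0.10
hi = mid + np.log(9.0) / slope   # p = 0.90
print(f"Uncertainty-region width is approximately {hi - lo:.1f}% of the F2 continuum")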

PMID: 29401061 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E4Envd
via IFTTT

Exponential Modeling of Frequency-Following Responses in American Neonates and Adults.

Exponential Modeling of Frequency-Following Responses in American Neonates and Adults.

J Am Acad Audiol. 2018 Feb;29(2):125-134

Authors: Jeng FC, Nance B, Montgomery-Reagan K, Lin CD

Abstract
BACKGROUND: The scalp-recorded frequency-following response (FFR) has been widely accepted in assessing the brain's processing of speech stimuli for people who speak tonal and nontonal languages. Characteristics of scalp-recorded FFRs with increasing number of sweeps have been delineated through the use of an exponential curve-fitting model in Chinese adults; however, characteristics of speech processing for people who speak a nontonal language remain unclear.
PURPOSE: This study had two specific aims. The first was to examine the characteristics of speech processing in neonates and adults who speak a nontonal language, evaluate the goodness of fit of an exponential model to neonatal and adult FFRs, and determine the differences, if any, between the two groups of participants. The second aim was to assess effective recording parameters for American neonates and adults.
RESEARCH DESIGN: This investigation employed a prospective between-subject study design.
STUDY SAMPLE: A total of 12 American neonates (1-3 days old) and 12 American adults (24.1 ± 2.5 yr old) were recruited. Each neonate passed an automated hearing screening at birth and all adult participants had normal hearing and were native English speakers.
DATA COLLECTION AND ANALYSIS: The English vowel /i/ with a rising pitch contour (117-166 Hz) was used to elicit the FFR. A total of 8,000 accepted sweeps were recorded from each participant. Three objective indices (Frequency Error, Tracking Accuracy, and Pitch Strength) were computed to estimate the frequency-tracking acuity and neural phase-locking magnitude when progressively more sweeps were included in the averaged waveform. For each objective index, the FFR trends were fit to an exponential curve-fitting model that included estimates of asymptotic amplitude, noise amplitude, and a time constant.
RESULTS: Significant differences were observed between groups for Frequency Error, Tracking Accuracy, and Pitch Strength of the FFR trends. The adult participants had significantly smaller Frequency Error (p < 0.001), better Tracking Accuracy (p = 0.001), and larger Pitch Strength (p = 0.003) values than the neonate participants. The adult participants also demonstrated a faster rate of improvement (i.e., a smaller time constant) in all three objective indices compared to the neonate participants. The smaller time constants observed in adults indicate that a larger number of sweeps will be needed to adequately assess the FFR for neonates. Furthermore, the exponential curve-fitting model provided a good fit to the FFR trends with increasing number of sweeps for American neonates (mean r2 = 0.89) and adults (mean r2 = 0.96).
CONCLUSIONS: Significant differences were noted between the neonatal and adult participants for Frequency Error, Tracking Accuracy, and Pitch Strength. These differences have important clinical implications in determining when to stop a recording and the number of sweeps needed to adequately assess the frequency-encoding acuity and neural phase-locking magnitude in neonates and adults. These findings lay an important foundation for establishing a normative database for American neonates and adults, and may prove to be useful in the development of diagnostic and therapeutic paradigms for neonates and adults who speak a nontonal language.
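
A hedged sketch of the exponential curve-fitting model referred to above: a response metric (for example, Pitch Strength) is modeled as rising from a noise floor toward an asymptote with a time constant as sweeps accumulate. This functional form is one plausible reading of the three named parameters (asymptotic amplitude, noise amplitude, time constant), not necessarily the authors' exact equation, and the data below are simulated.

# Illustrative exponential fit of a metric versus number of sweeps; assumed form.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(n_sweeps, asymptote, noise, tau):
    """Metric grows from the noise level toward its asymptote as sweeps are added."""
    return asymptote - (asymptote - noise) * np.exp(-n_sweeps / tau)

# Simulated Pitch Strength values at increasing sweep counts.
sweeps = np.arange(500, 8001, 500)
rng = np.random.default_rng(1)
pitch_strength = 0.8 - 0.6 * np.exp(-sweeps / 1500.0) + 0.02 * rng.standard_normal(len(sweeps))

(a, b, tau), _ = curve_fit(exp_model, sweeps, pitch_strength, p0=[0.8, 0.2, 1000.0])
print(f"asymptote = {a:.2f}, noise level = {b:.2f}, time constant = {tau:.0f} sweeps")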

PMID: 29401060 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nNa9Ch
via IFTTT

Survey of Current Practice in the Fitting and Fine-Tuning of Common Signal-Processing Features in Hearing Aids for Adults.

Survey of Current Practice in the Fitting and Fine-Tuning of Common Signal-Processing Features in Hearing Aids for Adults.

J Am Acad Audiol. 2018 Feb;29(2):118-124

Authors: Anderson MC, Arehart KH, Souza PE

Abstract
BACKGROUND: Current guidelines for adult hearing aid fittings recommend the use of a prescriptive fitting rationale with real-ear verification that considers the audiogram when determining frequency-specific gain and wide dynamic range compression ratios. However, the guidelines lack recommendations for how other common signal-processing features (e.g., noise reduction, frequency lowering, directional microphones) should be considered during the provision of hearing aid fittings and fine-tunings for adult patients.
PURPOSE: The purpose of this survey was to identify how audiologists make clinical decisions regarding common signal-processing features for hearing aid provision in adults.
RESEARCH DESIGN: An online survey was sent to audiologists across the United States. The 22 survey questions addressed four primary topics including demographics of the responding audiologists, factors affecting selection of hearing aid devices, the approaches used in the fitting of signal-processing features, and the strategies used in the fine-tuning of these features.
STUDY SAMPLE: A total of 251 audiologists who provide hearing aid fittings to adults completed the electronically distributed survey. The respondents worked in a variety of settings including private practice, physician offices, university clinics, and hospitals/medical centers.
DATA COLLECTION AND ANALYSIS: Data analysis was based on a qualitative analysis of the question responses. The survey results for each of the four topic areas (demographics, device selection, hearing aid fitting, and hearing aid fine-tuning) are summarized descriptively.
RESULTS: Survey responses indicate that the procedures audiologists use in fitting and fine-tuning vary by feature: the approaches used for fitting frequency-specific gain differ from those used for other features (compression time constants, frequency lowering parameters, noise reduction strength, directional microphones, feedback management). Audiologists commonly rely on prescriptive fitting formulas and probe microphone measures for the fitting of frequency-specific gain, and rely on manufacturers' default settings and recommendations for both the initial fitting and the fine-tuning of signal-processing features other than frequency-specific gain.
CONCLUSIONS: The survey results are consistent with a lack of published protocols and guidelines for fitting and adjusting signal-processing features beyond frequency-specific gain. To streamline current practice, a transparent evidence-based tool that enables clinicians to prescribe the setting of other features from individual patient characteristics would be desirable.

PMID: 29401059 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2E6pqci
via IFTTT

Predictive Accuracy of Sweep Frequency Impedance Technology in Identifying Conductive Conditions in Newborns.

Predictive Accuracy of Sweep Frequency Impedance Technology in Identifying Conductive Conditions in Newborns.

J Am Acad Audiol. 2018 Feb;29(2):106-117

Authors: Aithal V, Kei J, Driscoll C, Murakoshi M, Wada H

Abstract
BACKGROUND: Diagnosing conductive conditions in newborns is challenging for both audiologists and otolaryngologists. Although high-frequency tympanometry (HFT), acoustic stapedial reflex tests, and wideband absorbance measures are useful diagnostic tools, their performance in detecting middle ear conditions varies. The additional diagnostic sensitivity and specificity gained through new technology such as sweep frequency impedance (SFI) measurement may assist in the diagnosis of middle ear dysfunction in newborns.
PURPOSE: The purpose of this study was to determine the test performance of SFI to predict the status of the outer and middle ear in newborns against commonly used reference standards.
RESEARCH DESIGN: Automated auditory brainstem response (AABR), HFT (1000 Hz), transient evoked otoacoustic emission (TEOAE), distortion product otoacoustic emission (DPOAE), and SFI tests were administered to the study sample.
STUDY SAMPLE: A total of 188 neonates (98 males and 90 females) with a mean gestational age of 39.4 weeks were included in the sample. Mean age at the time of testing was 44.4 hr.
DATA COLLECTION AND ANALYSIS: Diagnostic accuracy of SFI was assessed in terms of its ability to identify conductive conditions in neonates when compared with nine different reference standards (including four single tests [AABR, HFT, TEOAE, and DPOAE] and five test batteries [HFT + DPOAE, HFT + TEOAE, DPOAE + TEOAE, DPOAE + AABR, and TEOAE + AABR]), using receiver operating characteristic (ROC) analysis and traditional test performance measures such as sensitivity and specificity.
RESULTS: The test performance of SFI against the test battery reference standard of HFT + DPOAE and single reference standard of HFT was high with an area under the ROC curve (AROC) of 0.87 and 0.82, respectively. Although the HFT + DPOAE test battery reference standard performed better than the HFT reference standard in predicting middle ear conductive conditions in neonates, the difference in AROC was not significant. Further analysis revealed that the highest sensitivity and specificity for SFI (86% and 88%, respectively) was obtained when compared with the reference standard of HFT + DPOAE. Among the four single reference standards, SFI had the highest sensitivity and specificity (76% and 88%, respectively) when compared against the HFT reference standard.
CONCLUSIONS: The high test performance of SFI against the HFT and HFT + DPOAE reference standards indicates that the SFI measure has appropriate diagnostic accuracy for detecting conductive conditions in newborns. Hence, the SFI test could be used as an adjunct tool to identify conductive conditions in universal newborn hearing screening programs, and can also be used in diagnostic follow-up assessments.
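
The test-performance analysis described above can be sketched as follows: area under the ROC curve for a hypothetical SFI index scored against a binary reference standard, with sensitivity and specificity computed at an arbitrary cutoff. The scores, cutoff, and sample size are invented for illustration only.

# Illustrative ROC / sensitivity / specificity calculation on simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
reference = rng.integers(0, 2, size=200)                     # 1 = conductive condition per reference standard
sfi_score = reference * 1.0 + rng.normal(0, 0.8, size=200)   # hypothetical SFI index

auc = roc_auc_score(reference, sfi_score)

cutoff = 0.5
predicted = (sfi_score >= cutoff).astype(int)
tp = np.sum((predicted == 1) & (reference == 1))
tn = np.sum((predicted == 0) & (reference == 0))
fp = np.sum((predicted == 1) & (reference == 0))
fn = np.sum((predicted == 0) & (reference == 1))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AROC = {auc:.2f}, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")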

PMID: 29401058 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2nPR6a6
via IFTTT

The influence of lower limb impairments on RaceRunning performance in athletes with hypertonia, ataxia or athetosis

Publication date: Available online 5 February 2018
Source: Gait & Posture
Author(s): Marietta L. van der Linden, Sadaf Jahed, Nicola Tennant, Martine H.G. Verheul
Objectives: RaceRunning enables athletes with limited or no walking ability to propel themselves independently using a three-wheeled running bike that has a saddle and a chest plate for support but no pedals. For RaceRunning to be included as a para-athletics event, an evidence-based classification system is required. Therefore, the aim of this study was to assess the association between a range of impairment measures and RaceRunning performance.
Methods: The following impairment measures were recorded: lower limb muscle strength assessed using Manual Muscle Testing (MMT), selective voluntary motor control assessed using the Selective Control Assessment of the Lower Extremity (SCALE), spasticity recorded using both the Australian Spasticity Assessment Score (ASAS) and Modified Ashworth Scale (MAS), passive range of motion (ROM) of the lower extremities, and the maximum static step length achieved on a stationary bike (MSSL). Associations between impairment measures and 100-meter race speed were assessed using Spearman's correlation coefficients.
Results: Sixteen male and fifteen female athletes (27 with cerebral palsy), aged 23 (SD = 7) years, with Gross Motor Function Classification System levels ranging from II to V, participated. The MSSL averaged over both legs, and the ASAS, MAS, SCALE, and MMT summed over all joints and both legs, significantly correlated with 100 m race performance (rho: 0.40-0.54). Passive knee extension was the only ROM measure significantly associated with race speed (rho = 0.48).
Conclusion: These results suggest that lower limb spasticity, isometric leg strength, selective voluntary motor control, and passive knee extension impact performance in RaceRunning athletes. This supports the potential use of these measures in a future evidence-based classification system.
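
As a small illustration of the statistical approach described above, the sketch below computes Spearman's rank correlation between a hypothetical summed impairment score and 100 m race speed using SciPy; the values are invented and are not the study data.

# Illustrative Spearman correlation on made-up data (not from the study).
import numpy as np
from scipy.stats import spearmanr

summed_mmt = np.array([34, 28, 40, 22, 36, 30, 25, 38, 27, 33])              # hypothetical summed strength scores
race_speed = np.array([3.1, 2.4, 3.6, 2.0, 3.3, 2.6, 2.2, 3.5, 2.5, 3.0])    # 100 m race speed in m/s

rho, p = spearmanr(summed_mmt, race_speed)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")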



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2nHrzRh
via IFTTT