Thursday, 8 November 2018

Bilingualism leads to greater auditory capacity

Volume 57, Issue 11, November 2018, Page 831-837

from #Audiology via ola Kala on Inoreader https://ift.tt/2AVNCuC
via IFTTT

Effect of Dual-Carrier Processing on the Intelligibility of Concurrent Vocoded Sentences

Purpose
The goal of this study was to examine the role of carrier cues in sound source segregation and the possibility of enhancing the intelligibility of 2 sentences presented simultaneously. Dual-carrier (DC) processing (Apoux, Youngdahl, Yoho, & Healy, 2015) was used to introduce synthetic carrier cues in vocoded speech.
Method
Listeners with normal hearing heard sentences processed either with a DC or with a traditional single-carrier (SC) vocoder. One group was asked to repeat both sentences in a sentence pair (Experiment 1). The other group was asked to repeat only 1 sentence of the pair and was provided additional segregation cues involving onset asynchrony (Experiment 2).
Results
Both experiments showed that not only is the “target” sentence more intelligible in DC compared with SC, but the “background” sentence intelligibility is equally enhanced. The participants did not benefit from the additional segregation cues.
Conclusions
The data showed a clear benefit of using a distinct carrier to convey each sentence (i.e., DC processing). Accordingly, the poor speech intelligibility in noise typically observed with SC-vocoded speech may be partly attributed to the envelope of independent sound sources sharing the same carrier. Moreover, this work suggests that noise reduction may not be the only viable option to improve speech intelligibility in noise for users of cochlear implants. Alternative approaches aimed at enhancing sound source segregation such as DC processing may help to improve speech intelligibility while preserving and enhancing the background.

from #Audiology via ola Kala on Inoreader https://ift.tt/2PlqNbD
via IFTTT

Minimally Detectable Change and Minimal Clinically Important Difference of a Decline in Sentence Intelligibility and Speaking Rate for Individuals With Amyotrophic Lateral Sclerosis

Purpose
The purpose of this study was to determine the minimally detectable change (MDC) and minimal clinically important difference (MCID) of a decline in speech sentence intelligibility and speaking rate for individuals with amyotrophic lateral sclerosis (ALS). We also examined how the MDC and MCID vary across severities of dysarthria.
Method
One hundred forty-seven patients with ALS and 49 healthy control subjects were selected from a larger, longitudinal study of bulbar decline in ALS, resulting in a total of 650 observations. Intelligibility and speaking rate in words per minute (WPM) were calculated using the Sentence Intelligibility Test (Yorkston, Beukelman, & Hakel, 2007), and the ALS Functional Rating Scale–Revised (Cedarbaum et al., 1999) was administered to capture patient perception of motor impairment. The MDC at the 95% confidence level was estimated using the following formula: MDC95 = 1.96 × √2 × SEM. For estimation of the MCID, receiver operating characteristic curves were generated, and area under the curve and optimal thresholds to maximize sensitivity and specificity were calculated.
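As a rough illustration of the two calculations described above, the sketch below computes MDC95 from a standard error of measurement and derives an ROC-based threshold (a candidate MCID) that maximizes sensitivity plus specificity. The SEM value, decline scores, and anchor labels are made up for illustration; they are not values from the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical standard error of measurement (SEM) for sentence intelligibility (%).
sem = 4.0
mdc95 = 1.96 * np.sqrt(2) * sem  # MDC95 = 1.96 x sqrt(2) x SEM
print(f"MDC95 = {mdc95:.2f} percentage points")

# Hypothetical anchor-based data: observed declines in intelligibility (%) and
# whether a clinically meaningful change was reported (1) or not (0).
decline = np.array([0.5, 1.0, 1.2, 1.8, 2.5, 3.0, 4.1, 5.5, 7.0, 9.2])
meaningful = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(meaningful, decline)
print(f"AUC = {roc_auc_score(meaningful, decline):.2f}")

# MCID estimate: the threshold maximizing sensitivity + specificity (Youden's J).
best = np.argmax(tpr - fpr)
print(f"Estimated MCID = {thresholds[best]:.2f} percentage points of decline")
```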
Results
The MDC for sentence intelligibility was 12.07%, and the MCID was 1.43%. The MDC for speaking rate was 36.57 WPM, and the MCID was 8.80 WPM. Both MDC and MCID estimates varied with severity of dysarthria.
Conclusions
The findings suggest that declines greater than 12% sentence intelligibility and 37 WPM are required to be outside measurement error and that these estimates vary widely across dysarthria severities. The MDC and MCID metrics used in this study to detect real and clinically relevant change should be estimated for other measures of speech outcomes in intervention research.

from #Audiology via ola Kala on Inoreader https://ift.tt/2CXiRa2
via IFTTT

Changing Developmental Trajectories of Toddlers With Autism Spectrum Disorder: Strategies for Bridging Research to Community Practice

Purpose
The need for community-viable, evidence-based intervention strategies for toddlers with autism spectrum disorder (ASD) is a national priority. The purpose of this research forum article is to identify gaps in intervention research and needs in community practice for toddlers with ASD, incorporate published findings from a randomized controlled trial (RCT) of the Early Social Interaction (ESI) model (Wetherby et al., 2014) to illustrate community-based intervention, report new findings on child active engagement from the ESI RCT, and offer solutions to bridge the research-to-community practice gap.
Method
Research findings were reviewed to identify gaps in the evidence base for toddlers with ASD. Published and new findings from the multisite ESI RCT compared the effects of two different ESI conditions for 82 toddlers with ASD to teach parents how to support active engagement in natural environments.
Results
The RCT of the ESI model was the only parent-implemented intervention that reported differential treatment effects on standardized measures of child outcomes, including social communication, developmental level, and adaptive behavior. A new measure of active engagement in the natural environment was found to be sensitive to change in 3 months for young toddlers with ASD and to predict outcomes on the standardized measures of child outcomes. Strategies for utilizing the Autism Navigator collection of web-based courses and tools using extensive video footage for families and professional development are offered for scaling up in community settings to change developmental trajectories of toddlers with ASD.
Conclusions
Current health care and education systems are challenged to provide intervention of adequate intensity for toddlers with ASD. The use of innovative technology can accelerate access to evidence-based early intervention for toddlers with ASD that addresses health disparities, enables immediate response as soon as ASD is suspected, and rapidly bridges the research-to-practice gap.
Presentation Video
https://doi.org/10.23641/asha.7297817

from #Audiology via ola Kala on Inoreader https://ift.tt/2yZcZdu
via IFTTT

Introduction to the Research Symposium Forum

Purpose
The purpose of this introduction is to provide an overview of the articles contained within this research forum of JSLHR. Each of these articles is based upon presentations from the 2017 ASHA Research Symposium.

from #Audiology via ola Kala on Inoreader https://ift.tt/2OzdgJf
via IFTTT

SMARTer Approach to Personalizing Intervention for Children With Autism Spectrum Disorder

Purpose
This review article introduces research methods for personalization of intervention. Our goals are to review evidence-based practices for improving social communication impairment in children with autism spectrum disorder generally and then to describe how these practices can be systematized in ways that personalize intervention, especially for children who respond slowly to an initial evidence-based practice.
Method
The narrative reflects on the current status of modular and targeted interventions on social communication outcomes in the field of autism research. Questions are introduced regarding personalization of interventions that can be addressed through research methods. These research methods include adaptive treatment designs and the Sequential Multiple Assignment Randomized Trial. Examples of empirical studies using these research designs are presented to answer questions of personalization.
Conclusion
Bridging the gap between research studies and clinical practice can be advanced by research that attempts to answer questions pertinent to the broad heterogeneity in children with autism spectrum disorder, their response to interventions, and the fact that a single intervention is not effective for all children.
Presentation Video
https://doi.org/10.23641/asha.7298021

from #Audiology via ola Kala on Inoreader https://ift.tt/2yUyWKX
via IFTTT

The Dimensionality of Oral Language in Kindergarten Spanish–English Dual Language Learners

Purpose
The purpose of this study was to examine the latent dimensionality of language in dual language learners (DLLs) who spoke Spanish as their native language and were learning English as their second language.
Method
Participants included 259 Spanish–English DLLs attending kindergarten. In the spring of their kindergarten year, children completed vocabulary, grammar, listening comprehension, and higher level language measures (comprehension monitoring and inferencing) in Spanish and English.
Results
Two models with similar fits best described the data. The first was a bifactor model with a single general language factor “l,” plus 2 additional language factors, 1 for Spanish and 1 for English. The second model was a 4-factor model, 1 for English that included all English language measures and 3 additional factors that included Spanish vocabulary, Spanish grammar, and Spanish higher level language.
Conclusions
These results indicate that a general language ability may underlie development in both Spanish and English. In contrast to a unidimensional structure found for monolingual English-speaking kindergarteners, oral language appears to be multidimensional in Spanish–English DLL kindergarteners, but multidimensionality is reflected in Spanish, not English.

from #Audiology via ola Kala on Inoreader https://ift.tt/2zbIvVc
via IFTTT

Executive Function Skills in School-Age Children With Autism Spectrum Disorder: Association With Language Abilities

Purpose
This article reviews research on executive function (EF) skills in children with autism spectrum disorder (ASD) and the relation between EF and language abilities. The current study assessed EF using nonverbal tasks of inhibition, shifting, and updating of working memory (WM) in school-age children with ASD. It also evaluated the association between children's receptive and expressive language abilities and EF performance.
Method
In this study, we sought to address variables that have contributed to inconsistencies in this area of research—including task issues, group comparisons, and participant heterogeneity. EF abilities in children with ASD (n = 48) were compared to typically developing controls (n = 71) matched on age, as well as when statistically controlling for group differences in nonverbal cognition, socioeconomic status, and social communication abilities. Six nonverbal EF tasks were administered—2 each to evaluate inhibition, shifting, and WM. Language abilities were assessed via a standardized language measure. Language–EF associations were examined for the ASD group as a whole and subdivided by language status.
Results
Children with ASD exhibited significant deficits in all components of EF compared to age-mates and showed particular difficulty with shifting after accounting for group differences in nonverbal cognition. Controlling for social communication—a core deficit in ASD—eliminated group differences in EF performance. A modest association was observed between language (especially comprehension) and EF skills, with some evidence of different patterns between children on the autism spectrum with and without language impairment.
Conclusions
There is a need for future research to examine the direction of influence between EF and language. It would be beneficial for EF interventions with children with ASD to consider language outcomes and, conversely, to examine whether specific language training facilitates aspects of executive control in children on the autism spectrum.
Presentation Video
https://doi.org/10.23641/asha.7298144

from #Audiology via ola Kala on Inoreader https://ift.tt/2yWVr1O
via IFTTT

Spontaneous Otoacoustic Emissions Reveal an Efficient Auditory Efferent Network

Purpose
Understanding speech often involves processing input from multiple modalities. The availability of visual information may make auditory input less critical for comprehension. This study examines whether the auditory system is sensitive to the presence of complementary sources of input when exerting top-down control over the amplification of speech stimuli.
Method
Auditory gain in the cochlea was assessed by monitoring spontaneous otoacoustic emissions (SOAEs), which are by-products of the amplification process. SOAEs were recorded while 32 participants (23 women, 9 men; M age = 21.13 years) identified speech sounds such as “ba” and “ga.” The speech sounds were presented either alone or with complementary visual input, as well as in quiet or with 6-talker babble.
Results
Analyses revealed that there was a greater reduction in the amplification of noisy auditory stimuli compared with quiet. This reduced amplification may aid in the perception of speech by improving the signal-to-noise ratio. Critically, there was a greater reduction in amplification when speech sounds were presented bimodally with visual information relative to when they were presented unimodally. This effect was evidenced by greater changes in SOAE levels from baseline to stimulus presentation in audiovisual trials relative to audio-only trials.
Conclusions
The results suggest that even the earliest stages of speech comprehension are modulated by top-down influences, resulting in changes to SOAEs depending on the presence of bimodal or unimodal input. Neural processes responsible for changes in cochlear function are sensitive to redundancy across auditory and visual input channels and coordinate activity to maximize efficiency in the auditory periphery.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Jfg39z
via IFTTT

Lexical Development in Young Children With Autism Spectrum Disorder (ASD): How ASD May Affect Intake From the Input

Purpose
Most children with autism spectrum disorder (ASD) have below-age lexical knowledge and lexical representation. Our goal is to examine ways in which difficulties with social communication and language processing that are often associated with ASD may constrain these children's abilities to learn new words and to explore whether minimizing the social communication and processing demands of the learning situation can lead to successful learning.
Method
In this narrative review of recent work on lexical development in ASD, we describe key findings on children's acquisition of nouns, pronouns, and verbs and outline our research program currently in progress aimed at further elucidating these issues.
Conclusion
Our review of studies that examine lexical development in children with ASD suggests that innovative intervention approaches that take into account both the social communication and processing demands of the learning situation may be particularly beneficial.

from #Audiology via ola Kala on Inoreader https://ift.tt/2z1mngW
via IFTTT

A Survey of Clinician Decision Making When Identifying Swallowing Impairments and Determining Treatment

Purpose
Speech-language pathologists (SLPs) are the primary providers of dysphagia management; however, this role has been criticized with assertions that SLPs are inadequately trained in swallowing physiology (Campbell-Taylor, 2008). To date, diagnostic acuity and treatment planning for swallowing impairments by practicing SLPs have not been examined. We conducted a survey to examine how clinician demographics and swallowing complexity influence decision making for swallowing impairments in videofluoroscopic images. Our goal was to determine whether SLPs' judgments of swallowing timing impairments align with impairment thresholds available in the research literature and whether or not there is agreement among SLPs regarding therapeutic recommendations.
Method
The survey included 3 videofluoroscopic swallows ranging in complexity (easy, moderate, and complex). Three hundred three practicing SLPs involved in dysphagia management completed the survey in a web-based format (Qualtrics, 2005) with frame-by-frame viewing capabilities. SLPs' judgments of impairment were compared against impairment thresholds for swallowing timing measures based on 95% confidence intervals from healthy swallows reported in the literature.
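To make the threshold comparison concrete, here is a minimal sketch of flagging a swallowing timing measure as impaired when it falls outside a normative interval derived from healthy swallows. The measure name, the normative values, and the use of mean ± 1.96 × SD as the interval are illustrative assumptions, not the thresholds used in the survey.

```python
import numpy as np

# Hypothetical normative timing data (seconds) from healthy swallows for one
# measure, e.g., pharyngeal transit time (values are illustrative, not published norms).
healthy = np.array([0.71, 0.80, 0.76, 0.69, 0.83, 0.74, 0.78, 0.72, 0.81, 0.75])

mean, sd = healthy.mean(), healthy.std(ddof=1)
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd  # assumed 95% normative interval

def is_impaired(value_s: float) -> bool:
    """Flag a measured timing value that falls outside the normative interval."""
    return not (lower <= value_s <= upper)

print(is_impaired(0.79))  # within the healthy range -> False
print(is_impaired(1.20))  # well outside the range  -> True
```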
Results
The primary impairment in swallowing physiology was identified 67% of the time for the easy swallow, 6% for the moderate swallow, and 6% for the complex swallow. On average, practicing clinicians mislabeled 8 or more swallowing events as impaired that were within the normal physiologic range compared with healthy normative data available in the literature. Agreement was higher among clinicians who reported using frame-by-frame analysis 80% of the time. A range of 19–21 different treatments was recommended for each video, regardless of complexity.
Conclusions
Poor to modest agreement in swallowing impairment identification, frequent false positives, and wide variability in treatment planning recommendations suggest that additional research and training in healthy and disordered swallowing are needed to increase accurate dysphagia diagnosis and treatment among clinicians.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Pj4ljH
via IFTTT

Early Motor and Communicative Development in Infants With an Older Sibling With Autism Spectrum Disorder

Purpose
A recent approach to identifying early markers of risk for autism spectrum disorder (ASD) has been to study infants who have an older sibling with ASD. These infants are at heightened risk (HR) for ASD and for other developmental difficulties, and even those who do not receive an eventual ASD diagnosis manifest a high degree of variability in trajectories of development. The primary goal of this review is to summarize findings from research on early motor and communicative development in these HR infants.
Method
This review focuses on 2 lines of inquiry. The first assesses whether delays and atypicalities in early motor abilities and in the development of early communication provide an index of eventual ASD diagnosis. The second asks whether such delays also influence infants' interactions with objects and people in ways that exert far-reaching, cascading effects on development.
Results
HR infants who do and who do not receive a diagnosis of ASD vary widely in motor and communicative development. In addition, variation in infant motor and communicative development appears to have cascading effects on development, both on the emergence of behavior in other domains and on the broader learning environment.
Conclusions
Advances in communicative and language development are supported by advances in motor skill. When these advances are slowed and/or when new skills are not consolidated and remain challenging for the infant, the enhanced potential for exploration afforded by new abilities and the concomitant increase in opportunities for learning are reduced. Improving our understanding of communicative delays of the sort observed in ASD and developing effective intervention methods requires going beyond the individual to consider the constant, complex interplay between developing communicators and their environments.
Presentation Video
https://doi.org/10.23641/asha.7299308

from #Audiology via ola Kala on Inoreader https://ift.tt/2z1mfOu
via IFTTT

Measuring Articulation Rate: A Comparison of Two Methods

Purpose
Mean articulatory rate (MAR) is an alternative approach to measuring articulation rate and is defined as the mean of 5 rate measures, each taken over a minimum of 10 and a maximum of 20 consecutive syllables of perceptually fluent speech without pauses. This study examined the validity of this approach.
Method
Reading and spontaneous speech samples were collected from 80 typically fluent adults ranging in age between 20 and 59 years. After orthographic transcription, all samples were subjected to an articulation rate analysis first using the prevailing “global” method, which takes into account the entire speech sample and involves manipulation of the speech sample, and then again applying the MAR method. Paired-samples t tests were conducted to compare global measurements to MAR measurements.
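The sketch below illustrates the difference between the two measures using hypothetical syllable onset times: a "global" rate computed over the whole pause-free sample versus a MAR averaged over five local windows of 10–20 consecutive syllables. The fixed 12-syllable windows and the synthetic timings are an interpretation of the definition above, not the authors' exact procedure.

```python
import numpy as np

# Hypothetical onset times (s) of 60 consecutive syllables in a pause-free stretch.
onsets = np.cumsum(np.random.default_rng(0).uniform(0.12, 0.25, size=60))

def rate(segment):
    """Articulation rate in syllables per second over one stretch of onsets."""
    return (len(segment) - 1) / (segment[-1] - segment[0])

# Global method: one rate over the entire sample.
global_rate = rate(onsets)

# MAR: mean of 5 rates, each over a window of 10-20 consecutive syllables
# (here, fixed 12-syllable windows as an illustrative choice).
windows = [onsets[i:i + 12] for i in range(0, 60, 12)][:5]
mar = np.mean([rate(w) for w in windows])

print(f"global = {global_rate:.2f} syll/s, MAR = {mar:.2f} syll/s")
```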
Results
For both spontaneous speech and reading, a strong correlation was found between the 2 methods. However, for both speech tasks, the paired-samples t tests revealed a significant difference with MAR values being higher than the global method values.
Conclusions
The MAR method is a valid method to measure articulation rate. However, it cannot be used interchangeably with the prevailing global method. Further standardization of the MAR method is needed before general clinical use can be suggested.

from #Audiology via ola Kala on Inoreader https://ift.tt/2qmSKC6
via IFTTT

The Shape Bias in Children With Autism Spectrum Disorder: Potential Sources of Individual Differences

Purpose
Children with autism spectrum disorder (ASD) demonstrate many mechanisms of lexical acquisition that support language in typical development; however, 1 notable exception is the shape bias. The bases of these children's difficulties with the shape bias are not well understood, and the current study explored potential sources of individual differences from the perspectives of both attentional and conceptual accounts of the shape bias.
Method
Shape bias performance from the dataset of Potrzeba, Fein, and Naigles (2015) was analyzed, including 33 children with typical development (M = 20 months; SD = 1.6), 15 children with ASD with high verbal abilities (M = 33 months; SD = 4.6), and 14 children with ASD with low verbal abilities (M = 33 months; SD = 6.6). Lexical predictors (shape-based noun percentage from the MacArthur–Bates Communicative Development Inventory; Fenson et al., 2007) and social-pragmatic predictors (joint attention duration during play sessions) were considered as predictors of subsequent shape bias performance.
Results
For children in the low verbal ASD group, initiation of joint attention (positively) and passive attention (negatively) predicted subsequent shape bias performance, controlling for initial language and developmental level. The proportion of each child's known nouns with shape-defined properties correlated negatively with shape bias performance in the high verbal ASD group but did not reach significance in regression models.
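A minimal sketch of the kind of regression described above, with synthetic data and hypothetical column names: shape bias performance predicted from initiation of joint attention and passive attention while controlling for initial language and developmental level. This illustrates the analysis logic only, not the authors' model or variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 14  # e.g., a group the size of the low verbal ASD sample
df = pd.DataFrame({
    "shape_bias": rng.uniform(0.2, 0.9, n),      # proportion of shape-based choices
    "ija": rng.uniform(0, 1, n),                 # initiation of joint attention
    "passive_attention": rng.uniform(0, 1, n),
    "initial_language": rng.normal(50, 10, n),
    "dev_level": rng.normal(100, 15, n),
})

model = smf.ols(
    "shape_bias ~ ija + passive_attention + initial_language + dev_level", data=df
).fit()
print(model.params)
```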
Conclusions
These findings suggest that no single account sufficiently explains the observed individual differences in shape bias performance in children with ASD. Nonetheless, these findings break new ground in highlighting the role of social communicative interactions as integral to understanding specific language outcomes (i.e., the shape bias) in children with ASD, especially those with low verbal abilities, and point to new hypotheses concerning the linguistic content of these interactions.
Presentation Video
https://doi.org/10.23641/asha.7299581

from #Audiology via ola Kala on Inoreader https://ift.tt/2z1mc5g
via IFTTT

Time Course of the Second Morpheme Processing During Spoken Disyllabic Compound Word Recognition in Chinese

Purpose
This study aimed to investigate the time course of meaning activation for the 2nd morpheme of compound words during Chinese spoken word recognition, using an eye-tracking technique with the printed-word paradigm.
Method
In the printed-word paradigm, participants were instructed to listen to a spoken target word (e.g., “大方”, /da4fang1/, generous) while presented with a visual display composed of 3 words: a morphemic competitor (e.g., “圆形”, /yuan2xing2/, circle), which was semantically related to the 2nd morpheme (e.g., “方”, /fang1/, square) of the spoken target word; a whole-word competitor (e.g., “吝啬”, /lin4se4/, stingy), which was semantically related to the spoken target word at the whole-word level; and a distractor, which was semantically related to neither the morpheme nor the whole target word. Participants were asked to indicate whether or not the spoken target word was in the visual display, and their eye movements were recorded.
Results
The logit mixed-model analysis showed both the morphemic competitor and the whole-word competitor effects. Both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the 2nd-morphemic competitor effect occurred at a relatively later time window (i.e., 1000–1500 ms) compared with the whole-word competitor effect (i.e., 200–1000 ms).
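For readers unfamiliar with this kind of analysis, the following is a simplified sketch of testing a competitor effect on fixations within one time window. It uses ordinary logistic regression on trial-level fixation indicators (the study used a logit mixed model with random effects, which are omitted here), and all column names and numbers are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data for the 1000-1500 ms window:
# 'fixated' = 1 if the word was fixated in the window; 'word_type' is the display word's role.
data = pd.DataFrame({
    "fixated":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    "word_type": ["morphemic", "morphemic", "morphemic", "morphemic",
                  "distractor", "distractor", "distractor", "distractor",
                  "whole_word", "whole_word", "whole_word", "whole_word",
                  "distractor", "distractor", "morphemic", "whole_word"],
})

# Logistic regression: does fixation probability differ by competitor type
# relative to the distractor baseline?
model = smf.logit("fixated ~ C(word_type, Treatment('distractor'))", data=data).fit()
print(model.summary())
```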
Conclusion
Findings in this study suggest that semantic information of both the 2nd morpheme and the whole word of a compound was activated in spoken word recognition and that the meaning activation of the 2nd morpheme followed the activation of the whole word.

from #Audiology via ola Kala on Inoreader https://ift.tt/2JffUD3
via IFTTT

Treating Speech Movement Hypokinesia in Parkinson's Disease: Does Movement Size Matter?

Purpose
This study evaluates the effects of a novel speech therapy program that uses a verbal cue and gamified augmented visual feedback regarding tongue movements to address articulatory hypokinesia during speech in individuals with Parkinson's disease (PD).
Method
Five participants with PD participated in an ABA single-subject design study. The treatment aimed to increase tongue movement size using a combination of a verbal cue and augmented visual feedback and was conducted in ten 45-min sessions over 5 weeks. The presence of visual feedback was manipulated during treatment. Articulatory working space of the tongue was the primary outcome measure and was examined during treatment and in cued and uncued sentences pre- and posttreatment. Changes in speech intelligibility in response to a verbal cue pre- and posttreatment were also examined.
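Articulatory working space is commonly quantified as the area of the convex hull enclosing tracked tongue positions. As an illustration of that general idea only (not the authors' exact pipeline, markers, or coordinate system), the sketch below computes a 2-D hull area from hypothetical tongue-marker coordinates.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical midsagittal tongue-marker positions (mm) sampled during a sentence.
rng = np.random.default_rng(1)
positions = rng.normal(loc=[0.0, 0.0], scale=[8.0, 5.0], size=(200, 2))

hull = ConvexHull(positions)
# For 2-D input, ConvexHull.volume is the enclosed area (mm^2); .area is the perimeter.
print(f"Articulatory working space ~ {hull.volume:.1f} mm^2")
```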
Results
During treatment, 4/5 participants showed a beneficial effect of visual feedback on tongue articulatory working space. At the end of the treatment, they used larger tongue movements when cued, relative to their pretreatment performance. None of the participants, however, generalized the effect to the uncued sentences. Speech intelligibility of cued sentences was judged as superior posttreatment only in a single participant.
Conclusions
This study demonstrated that using an augmented visual feedback approach is beneficial, beyond a verbal cue alone, in addressing articulatory hypokinesia in individuals with PD. An optimal degree of articulatory expansion might, however, be required to elicit a speech intelligibility benefit.

from #Audiology via ola Kala on Inoreader https://ift.tt/2P7WoOK
via IFTTT

Human Voice as a Measure of Mental Load Level

Purpose
The aim of this study was to determine a reliable and efficient set of acoustic parameters of the human voice able to estimate individuals' mental load level. Implementing detection methods and real-time analysis of mental load is a major challenge for monitoring and enhancing human task performance, especially during high-risk activities (e.g., flying aircraft).
Method
The voices of 32 participants were recorded during a cognitive task featuring word list recall. The difficulty of the task was manipulated by varying the number of words in each list (i.e., between 1 and 7, corresponding to 7 mental load conditions). Evoked pupillary response, known to be a useful proxy of mental load, was recorded simultaneously with speech to confirm variations in mental load level during the experimental task.
Results
Classic features (fundamental frequency, its standard deviation, number of periods) and original features (frequency modulation and short-term variation in digital amplitude length) of the acoustic signals were predictive of memory load condition. They varied significantly according to the number of words to recall, specifically beyond a threshold of 3–5 words to recall, that is, when memory performance started to decline.
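As a hedged illustration of how the classic voice features might be extracted, the sketch below computes a fundamental-frequency track, its mean and standard deviation, and a rough period count from a recording using librosa's pYIN tracker. The file name, frequency range, and frame settings are assumptions, and the study's "original" features (frequency modulation, short-term variation in digital amplitude length) are not reproduced here.

```python
import numpy as np
import librosa

# Load a hypothetical recording of one recall response (file name is illustrative).
y, sr = librosa.load("response_trial_01.wav", sr=16000)

# pYIN fundamental-frequency track over a typical adult speech range.
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
f0_voiced = f0[voiced_flag]

features = {
    "f0_mean_hz": float(np.nanmean(f0_voiced)),
    "f0_sd_hz": float(np.nanstd(f0_voiced)),
    # Rough count of glottal periods: voiced duration (default hop = 512 samples) x mean f0.
    "n_periods": float(np.sum(voiced_flag) * (512 / sr) * np.nanmean(f0_voiced)),
}
print(features)
```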
Conclusions
Some acoustic parameters of the human voice could be an appropriate and efficient means for detecting mental load levels.

from #Audiology via ola Kala on Inoreader https://ift.tt/2SrKP2Z
via IFTTT

Masthead



from #Audiology via ola Kala on Inoreader https://ift.tt/2OzWBoL
via IFTTT

Bilingualism leads to greater auditory capacity.

Int J Audiol. 2018 Nov;57(11):831-837

Authors: Motlagh Zadeh L, Jalilvand Karimi L, Silbert NH

Abstract
The objective of this article is to investigate the effects of bilingualism on the auditory capacity of young adults using a dichotic consonant-vowel (CV) test. Listeners were asked to identify distinct CVs dichotically presented to each ear through headphones. CV identification accuracy in both ears served as the measure of listeners' auditory capacity. Eighty normal-hearing participants, including 40 bilinguals (23 males and 17 females) and 40 monolinguals (11 males and 29 females), made up the study sample. Members of the bilingual group acquired their second language before entering elementary school. The bilingual listeners had higher mean both-ear-correct scores than did monolingual listeners, indicating a greater auditory capacity in the bilingual group than in the monolingual group. The finding of greater auditory capacity in bilinguals on a task requiring divided attention reflects a greater ability to store and recall auditory information. However, the inconsistency of results across studies of bilingual advantages indicates a need for further research in this area using both linguistic and non-linguistic tasks and considering age of acquisition as a possible moderating variable.

PMID: 30403921 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2OxMzoq
via IFTTT

Measurement of Thresholds Using Auditory Steady-State Response and Cochlear Microphonics in Children with Auditory Neuropathy.

Measurement of Thresholds Using Auditory Steady-State Response and Cochlear Microphonics in Children with Auditory Neuropathy.

J Am Acad Audiol. 2018 Nov 08;:

Authors: Lu P, Huang Y, Chen WX, Jiang W, Hua NY, Wang Y, Wang B, Xu ZM

Abstract
BACKGROUND: The detection of precise hearing thresholds in infants and children with auditory neuropathy (AN) is challenging with current objective methods, especially in those younger than six months of age.
PURPOSE: The aim of this study was to compare thresholds obtained using auditory steady-state response (ASSR) and cochlear microphonics (CM) in children with AN and in children with normal hearing.
RESEARCH DESIGN: The thresholds of CM, ASSR, and visual reinforcement audiometry (VRA) tests were recorded; the ASSR and VRA frequencies used were 250, 500, 1000, 2000, and 4000 Hz.
STUDY SAMPLE: The participants in this study were 15 children with AN (27 ears; 1-7.6 years, median age 4.1 years) and 10 children with normal hearing (20 ears; 1-8 years, median age 4 years).
DATA COLLECTION AND ANALYSIS: The thresholds of the three methods were compared, and histograms were used to represent frequency distributions of threshold differences obtained from the three methods.
RESULTS: In children with normal hearing, the average CM thresholds (84.5 dB) were significantly higher than the VRA thresholds (10.0-10.8 dB); in children with AN, both CM and VRA responses were seen at high signal levels (88.9 dB and 70.6-103.4 dB, respectively). In the children with normal hearing, the difference between mean VRA and ASSR thresholds ranged from 17.5 to 30.3 dB, which was significantly smaller than the difference between the mean CM and VRA thresholds (71.5-72.3 dB). The correlation between VRA and ASSR in children with normal hearing ranged from 0.38 to 0.48, whereas no such correlation was seen in children with AN at any frequency (0.03-0.19).
CONCLUSIONS: Our results indicated that ASSR and CM were poor predictors of the conventional behavioral threshold in children with AN.
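
As a rough illustration of the per-frequency comparison described in the data analysis, the sketch below computes ASSR-VRA threshold differences and Pearson correlations at each test frequency. The threshold values are synthetic placeholders, not the study's data.

```python
# Per-frequency comparison of two threshold measures: mean difference and
# Pearson correlation, computed across ears. Values are placeholders only.
import numpy as np

freqs = [250, 500, 1000, 2000, 4000]  # Hz, as in the ASSR/VRA protocol

# rows = ears, columns = frequencies (dB HL); illustrative values only
vra = np.array([[10, 10, 15, 10, 10],
                [15, 10, 10, 15, 10],
                [10, 15, 10, 10, 15]], dtype=float)
assr = np.array([[35, 30, 40, 35, 30],
                 [40, 35, 30, 40, 35],
                 [30, 40, 35, 30, 40]], dtype=float)

for i, f in enumerate(freqs):
    diff = assr[:, i] - vra[:, i]
    r = np.corrcoef(assr[:, i], vra[:, i])[0, 1]
    print(f"{f} Hz: mean ASSR-VRA difference = {diff.mean():.1f} dB, r = {r:.2f}")
```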

PMID: 30403957 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2QnQPZ3
via IFTTT

The Effects of Extended Input Dynamic Range on Laboratory and Field-Trial Evaluations in Adult Hearing Aid Users.

The Effects of Extended Input Dynamic Range on Laboratory and Field-Trial Evaluations in Adult Hearing Aid Users.

J Am Acad Audiol. 2018 Nov 08;:

Authors: Plyler PN, Easterday M, Behrens T

Abstract
BACKGROUND: Digital hearing aids using a 16-bit analog-to-digital converter (ADC) provide a 96-dB input dynamic range. The level at which the ADC peak clips and distorts input signals ranges between 95 and 105 dB SPL. Recent research evaluated the effect of extending the input dynamic range in a commercially available hearing aid. Although the results were promising, several limitations were noted by the authors. Laboratory testing was conducted using recordings from hearing aids set for a flat 50-dB loss, whereas field testing was conducted with hearing aids fitted to the participants' individual hearing losses. In addition, participants rarely encountered input levels of sufficient intensity to adequately test the feature and were unable to directly compare aids with and without extended input dynamic range (EIDR) under identical conditions.
PURPOSE: The effects of EIDR under realistic and repeatable test conditions both within and outside the laboratory setting were evaluated.
RESEARCH DESIGN: A repeated measures design was used. The experiment was single-blinded.
STUDY SAMPLE: Twenty adults (14 males and six females) between the ages of 30 and 71 years (average age 62 years) who were experienced hearing aid users participated.
DATA COLLECTION AND ANALYSIS: Each participant was fitted binaurally with Oticon Opn hearing instruments using the National Acoustic Laboratories-Nonlinear 1 (NAL-NL1) fitting strategy. Participants completed a two-week trial period using hearing aids with EIDR and a two-week trial period without EIDR. The initial EIDR condition was counterbalanced across participants. After each trial, laboratory evaluations were obtained at 85 dBC using the Connected Speech Test, the Hearing in Noise Test, and the acceptable noise level (ANL). Satisfaction ratings were conducted at 85 dBC using speech in quiet and in noise as well as music. Field-trial evaluations were obtained using the Abbreviated Profile of Hearing Aid Benefit (APHAB). Satisfaction ratings were also conducted in the field at 85 dBC using speech and music. After the study, each participant indicated which trial period they preferred overall. Repeated-measures analyses of variance were conducted to assess listener performance. Pairwise comparisons were then completed for significant main effects.
RESULTS: In the laboratory, results did not reveal significant differences between EIDR conditions on any speech perception in noise test or any satisfaction rating measurement. In the field, results did not reveal significant differences between the EIDR conditions on the APHAB or on any of the satisfaction rating measurements. Nine participants (45%) preferred the EIDR condition. Fifteen participants (75%) indicated that speech clarity was the most important factor in determining the overall preference. Sixteen participants (80%) preferred the EIDR condition that resulted in the lower ANL.
CONCLUSIONS: The use of EIDR in hearing aids within and outside the laboratory under realistic and repeatable test conditions did not positively or negatively impact performance or preference. Results disagreed with previous findings obtained in the laboratory that suggested EIDR improved performance; however, results agreed with previous findings obtained in the field. Future research may consider the effect of hearing aid experience, input level, and noise acceptance on potential benefit with EIDR.
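
For context on the 96-dB figure cited in the background, the short calculation below shows how the input dynamic range of a 16-bit ADC is derived and how an assumed clipping level maps to a quantization floor. The 105 dB SPL clipping point is only an example taken from the range quoted above, not a property of any particular device.

```python
# Dynamic range of an N-bit converter: the ratio between the largest and
# smallest representable amplitudes is 2**N, expressed in decibels below.
import math

bits = 16
dynamic_range_db = 20 * math.log10(2 ** bits)   # about 96.3 dB
print(f"{bits}-bit ADC dynamic range = {dynamic_range_db:.1f} dB")

# Assumed example: if the converter clips at 105 dB SPL, the quietest
# resolvable input sits roughly one dynamic range below that level.
clip_spl = 105.0
floor_spl = clip_spl - dynamic_range_db
print(f"approximate quantization floor = {floor_spl:.1f} dB SPL")
```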

PMID: 30403956 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2PfdMRQ
via IFTTT

The Relationship between Severity of Hearing Loss and Subjective Tinnitus Loudness among Patients Seen in a Specialist Tinnitus and Hyperacusis Therapy Clinic in UK.

The Relationship between Severity of Hearing Loss and Subjective Tinnitus Loudness among Patients Seen in a Specialist Tinnitus and Hyperacusis Therapy Clinic in UK.

J Am Acad Audiol. 2018 Nov 08;:

Authors: Aazh H, Salvi R

Abstract
BACKGROUND: Hearing loss is often associated with the phantom sound of tinnitus. However, the degree of association between severity of hearing loss and tinnitus loudness, taking into account the impact of other variables (e.g., emotional disturbances), is not fully understood. This is an important question for audiologists who specialize in tinnitus rehabilitation, as patients often ask whether the loudness of their tinnitus will increase if their hearing gets worse.
PURPOSE: To explore the relationship between tinnitus loudness and pure tone hearing thresholds.
RESEARCH DESIGN: This was a retrospective cross-sectional study.
STUDY SAMPLE: 445 consecutive patients who attended a Tinnitus and Hyperacusis Therapy Specialist Clinic in the UK were included.
DATA COLLECTION AND ANALYSIS: The results of audiological tests and self-report questionnaires were gathered retrospectively from the records of the patients. Multiple-regression analysis was used to assess the relationship between tinnitus loudness, hearing loss and other variables.
RESULTS: The regression model showed a significant relationship between the pure tone average (PTA) at the frequencies 0.25, 0.5, 1, 2, and 4 kHz of the better ear and the tinnitus loudness as measured via visual analogue scale (VAS), r (regression coefficient) = 0.022 (p < 0.001). Other variables significantly associated with tinnitus loudness were tinnitus annoyance (r = 0.49, p < 0.001) and the effect of tinnitus on life (r = 0.09, p = 0.006). The regression model explained 52% of the variance of tinnitus loudness.
CONCLUSIONS: Although increased tinnitus loudness was associated with worse PTA, the relationship was very weak. Tinnitus annoyance and impact of tinnitus on life were more strongly correlated with tinnitus loudness than PTA.
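
As a simplified illustration of the analysis described above, the sketch below computes a better-ear pure tone average over 0.25-4 kHz and fits an ordinary least squares regression of VAS loudness on PTA and annoyance. The data are synthetic and the covariate set is reduced relative to the full model reported in the paper.

```python
# Better-ear PTA plus a reduced ordinary-least-squares model of VAS tinnitus
# loudness. All values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Per-patient thresholds (dB HL) at 0.25, 0.5, 1, 2, and 4 kHz, one row per ear
left = rng.normal(35, 15, size=(n, 5))
right = rng.normal(40, 15, size=(n, 5))
pta_better = np.minimum(left.mean(axis=1), right.mean(axis=1))  # better ear = lower PTA

annoyance = rng.uniform(0, 10, size=n)   # VAS annoyance, 0-10
loudness = 0.02 * pta_better + 0.5 * annoyance + rng.normal(0, 1, size=n)

# Ordinary least squares: loudness ~ intercept + PTA + annoyance
X = np.column_stack([np.ones(n), pta_better, annoyance])
coefs, *_ = np.linalg.lstsq(X, loudness, rcond=None)
pred = X @ coefs
r2 = 1 - np.sum((loudness - pred) ** 2) / np.sum((loudness - loudness.mean()) ** 2)
print("intercept, PTA coefficient, annoyance coefficient:", np.round(coefs, 3))
print("R^2 =", round(r2, 2))
```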

PMID: 30403955 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2QA2BzC
via IFTTT
