Monday, December 10, 2018

The Effect of Memory Span and Manual Dexterity on Hearing Aid Handling Skills in New and Experienced Hearing Aid Users

Purpose
The aim of the study was to determine the effect of memory function and manual dexterity on new and experienced hearing aid users' abilities to use and care for their hearing aids.
Method
New and experienced hearing aid users were administered the Practical Hearing Aid Skills Test–Revised (PHAST-R; Doherty & Desjardins, 2012), a measure of a hearing aid user's ability to use and care for their hearing aids. The test was administered during their 30-day hearing aid check or yearly hearing evaluation appointment. Participants were also administered the Digit Span Test of memory function (Wechsler, 1997) and the Nine-Hole Peg Test of manual dexterity (Mathiowetz, Weber, Kashman, & Volland, 1985).
Results
Participants with poorer memory span function performed significantly poorer on the PHAST-R than participants with better memory span. However, no significant relationship between manual dexterity and PHAST-R performance was observed. Experienced hearing aid users who were recently reoriented on how to use and care for their hearing aids performed significantly better on the PHAST-R compared to new hearing aid users and experienced hearing aid users who had not received a hearing aid orientation within the last year. Cleaning the hearing aid and telephone use were the 2 PHAST-R tasks that all hearing aid clients needed the most recounseling on.
Conclusion
Memory span is significantly related to an individual's ability to correctly use and care for their hearing aids regardless of whether they are new or experienced hearing aid users.
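The relationship reported above is, at bottom, a correlation between two sets of test scores (e.g., digit span vs. PHAST-R). A minimal sketch of the usual statistic, Pearson's r, on hypothetical scores rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: digit span scores vs. PHAST-R scores (higher = better)
digit_span = [4, 5, 5, 6, 7, 8, 9]
phast_r = [60, 65, 70, 72, 80, 85, 90]
print(round(pearson_r(digit_span, phast_r), 2))
```

A value near +1 would correspond to the pattern the study reports for memory span; a value near 0 would correspond to the null result for manual dexterity.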

from #Audiology via ola Kala on Inoreader https://ift.tt/2PxWAlM
via IFTTT

Evaluation of a Protocol for Integrated Speech Audiometry

Purpose
This project was aimed at evaluating the reliability, validity, and clinical utility of a protocol for integrated measurements of the most comfortable level (MCL) and uncomfortable level (UCL) for speech, in combination with the speech recognition threshold (SRT). We also evaluated the validity of using spondee words when measuring speech MCL and UCL.
Method
In a randomized block design, equal numbers of women and men with normal hearing, aged 18–29 years, were assigned to each of 3 experimental stimulus conditions: spondee singlets, spondee triplets, or connected discourse (n = 12 per group). Following measurement of the SRT, a modified method of limits was employed to establish, on a 7-point loudness rating scale, an ascending MCL, a descending MCL, and an ascending UCL. A single instructional set covered all loudness measurements. Test times were tracked electronically to assess clinical efficiency. All test conditions were repeated during each of 2 separate test sessions.
Results
Mean SRTs, MCLs, and UCLs across the 3 different experimental groups were found not to differ statistically or clinically (mean differences < 5 dB). Intrasession and intersession reliability for the various measures were excellent, and testing of all listeners was completed in a timely manner. In a follow-up experiment with adults with normal hearing who were only a decade older than participants in our main experiment, the older group was found to have significantly higher MCLs and UCLs.
Conclusions
Spondee words can be used routinely to obtain reliable, valid, and clinically efficient measures of MCLs and UCLs for speech, in a protocol combined with the SRT. Spondees, presented singly, yielded the greatest level of efficiency overall. Results support a recommendation to obtain an ascending measurement of MCL prior to a descending measurement and to establish the MCL by averaging the 2 values.

from #Audiology via ola Kala on Inoreader https://ift.tt/2A1RqKb
via IFTTT

Age Effects on Concurrent Speech Segregation by Onset Asynchrony

Purpose
Listening to 1 voice surrounded by other voices is more challenging for elderly listeners than for young listeners. This could be caused by a reduced ability to use acoustic cues—such as slight differences in onset time—for the segregation of concurrent speech signals. Here, we study whether the ability to benefit from onset asynchrony differs between young (18–33 years) and elderly (55–74 years) listeners.
Method
We investigated young (normal hearing, N = 20) and elderly (mildly hearing impaired, N = 26) listeners' ability to segregate 2 vowels with onset asynchronies ranging from 20 to 100 ms. Behavioral measures were complemented by a specific event-related brain potential component, the object-related negativity, indicating the perception of 2 distinct auditory objects.
Results
Elderly listeners' behavioral performance (identification accuracy of the 2 vowels) was considerably poorer than young listeners'. However, both age groups showed the same amount of improvement with increasing onset asynchrony. Object-related negativity amplitude also increased similarly in both age groups.
Conclusion
Both age groups benefit to a similar extent from onset asynchrony as a cue for concurrent speech segregation during active (behavioral measurement) and during passive (electroencephalographic measurement) listening.

from #Audiology via ola Kala on Inoreader https://ift.tt/2B70L2W
via IFTTT

Parent-Implemented Communication Treatment for Infants and Toddlers With Hearing Loss: A Randomized Pilot Trial

Purpose
Despite advances in cochlear implant and hearing aid technology, many children with hearing loss continue to exhibit poorer language skills than their hearing peers. This randomized pilot trial tested the effects of a parent-implemented communication treatment targeting prelinguistic communication skills in infants and toddlers with hearing loss.
Method
Participants included 19 children between 6 and 24 months of age with moderate to profound, bilateral hearing loss. Children were randomly assigned to the parent-implemented communication treatment group or a “usual care” control group. Parents and children participated in 26 hour-long home sessions in which parents were taught to use communication support strategies. The primary outcome measures were the Communication and Symbolic Behavior Scales (Wetherby & Prizant, 2003), a measure of child prelinguistic skills, and parental use of communication support strategies during a naturalistic play session.
Results
Parents in the treatment group increased their use of communication support strategies by 17%. Children in the treatment group made significantly greater gains in speech prelinguistic skills (d = 1.09, p = .03) compared with the control group. There were no statistically significant differences in social and symbolic prelinguistic skills; however, the effect sizes were large (d = 0.78, p = .08; d = 0.91, p = .10).
Conclusions
This study provides modest preliminary support for the short-term effects of a parent-implemented communication treatment for children with hearing loss. Parents learned communication support strategies that subsequently impacted child prelinguistic skills. Although these results appear promising, the sample size is very small. Future research should include a larger clinical trial and child-level predictors of response to treatment.

from #Audiology via ola Kala on Inoreader https://ift.tt/2UBVVDV
via IFTTT

Prevalence of Publication Bias Tests in Speech, Language, and Hearing Research

Purpose
The purpose of this research note is to systematically document the extent that researchers who publish in American Speech-Language-Hearing Association (ASHA) journals search for and include unpublished literature in their meta-analyses and test for publication bias.
Method
This research note searched all ASHA peer-reviewed journals for published meta-analyses and reviewed all qualifying articles for characteristics related to the acknowledgment and assessment of publication bias.
Results
Of meta-analyses published in ASHA journals, 75% discuss publication bias in some form; however, less than 50% test for publication bias. Further, only 38% (n = 11) interpreted the findings of these tests.
Conclusion
Findings reveal that more attention is needed to the presence and impact of publication bias. This research note concludes with 5 recommendations for addressing publication bias.
Supplemental Material
https://doi.org/10.23641/asha.7268648
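Egger's regression is one common test for the publication bias this research note concerns: the standardized effect (effect / SE) is regressed on precision (1 / SE), and an intercept far from zero flags funnel-plot asymmetry. A minimal sketch with hypothetical meta-analytic data, not drawn from the reviewed articles:

```python
def egger_intercept(effects, ses):
    """Egger's regression asymmetry test: regress the standardized
    effect (effect / SE) on precision (1 / SE) and return the
    intercept; values far from zero suggest small-study bias."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx

# Hypothetical effect sizes and standard errors from a small meta-analysis:
# smaller studies (larger SEs) report larger effects, a bias signature.
effects = [0.8, 0.6, 0.5, 0.45, 0.4]
ses = [0.40, 0.30, 0.20, 0.15, 0.10]
print(round(egger_intercept(effects, ses), 2))
```

In practice the intercept is tested against zero with a t test; this sketch shows only the point estimate.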

from #Audiology via ola Kala on Inoreader https://ift.tt/2B5KzQN
via IFTTT

Verb Variability and Morphosyntactic Priming With Typically Developing 2- and 3-Year-Olds

Purpose
This study was specifically designed to examine how verb variability and verb overlap in a morphosyntactic priming task affect typically developing children's use and generalization of auxiliary IS.
Method
Forty typically developing 2- to 3-year-old native English-speaking children with inconsistent auxiliary IS production were primed with 24 present progressive auxiliary IS sentences. Half of the children heard auxiliary IS primes with 24 unique verbs (high variability). The other half heard auxiliary IS primes with only 6 verbs, repeated 4 times each (low variability). In addition, half of the children heard prime–target pairs with overlapping verbs (lexical boost), whereas the other half heard prime–target pairs with nonoverlapping verbs (no lexical boost). To assess use and generalization of the targeted structure to untrained verbs, all children described probe items at baseline and 5 min and 24 hr after the priming task.
Results
Children in the high variability group demonstrated strong priming effects during the task and increased auxiliary IS production compared with baseline performance 5 min and 24 hr after the priming task, suggesting learning and generalization of the primed structure. Children in the low variability group showed no significant increases in auxiliary IS production and fell significantly below the high variability group in the 24-hr posttest. Verb overlap did not boost priming effects during the priming task or in posttest probes.
Conclusions
Typically developing children do indeed make use of lexical variability in their linguistic input to help them extract and generalize abstract grammatical rules. They can do this quite quickly, with relatively stable, robust learning occurring after a single optimally variable input session. With reduced variability, learning does not occur.

from #Audiology via ola Kala on Inoreader https://ift.tt/2CXiUme
via IFTTT

Frequencies in Perception and Production Differentially Affect Child Speech

Purpose
Frequent sounds and frequent words are both acquired at an earlier age and are produced by children more accurately. Recent research suggests that frequency is not always a facilitative concept, however. Interactions between input frequency in perception and practice frequency in production may limit or inhibit growth. In this study, we consider how a range of input frequencies affect production accuracy and referent identification.
Method
Thirty-three typically developing 3- and 4-year-olds participated in a novel word-learning task. In the initial test block, participants heard nonwords 1, 3, 6, or 10 times—produced either by a single talker or by multiple talkers—and then produced them immediately. In a posttest, participants heard all nonwords just once and then produced them. Referent identification was probed in between the test and posttest.
Results
Production accuracy was most clearly facilitated by an input frequency of 3 during the test block. Input frequency interacted with production practice, and the facilitative effect of input frequency did not carry over to the posttest. Talker variability did not affect accuracy, regardless of input frequency. The referent identification results did not favor talker variability or a particular input frequency value, but participants were able to learn the words at better than chance levels.
Conclusions
The results confirm that the input can be facilitative, but input frequency and production practice interact in ways that limit input-based learning, and more input is not always better. Future research on this interaction may allow clinicians to optimize various types of frequency commonly used during therapy.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rn6Sf0
via IFTTT

The Effects of Static and Moving Spectral Ripple Sensitivity on Unaided and Aided Speech Perception in Noise

Purpose
This study evaluated whether certain spectral ripple conditions were more informative than others in predicting ecologically relevant unaided and aided speech outcomes.
Method
A quasi-experimental study design was used to evaluate 67 older adult hearing aid users with bilateral, symmetrical hearing loss. Speech perception in noise was tested under conditions of unaided and aided, auditory-only and auditory–visual, and 2 types of noise. Predictors included age, audiometric thresholds, audibility, hearing aid compression, and modulation depth detection thresholds for moving (4-Hz) or static (0-Hz) 2-cycle/octave spectral ripples applied to carriers of broadband noise or 2000-Hz low- or high-pass filtered noise.
Results
A principal component analysis of the modulation detection data found that broadband and low-pass static and moving ripple detection thresholds loaded onto the first factor whereas high-pass static and moving ripple detection thresholds loaded onto a second factor. A linear mixed model revealed that audibility and the first factor (reflecting broadband and low-pass static and moving ripples) were significantly associated with speech perception performance. Similar results were found for unaided and aided speech scores. The interactions between speech conditions were not significant, suggesting that the relationship between ripples and speech perception was consistent regardless of visual cues or noise condition. High-pass ripple sensitivity was not correlated with speech understanding.
Conclusions
The results suggest that, for hearing aid users, poor speech understanding in noise and sensitivity to both static and slow-moving ripples may reflect deficits in the same underlying auditory processing mechanism. Significant factor loadings involving ripple stimuli with low-frequency content may suggest an impaired ability to use temporal fine structure information in the stimulus waveform. Support is provided for the use of spectral ripple testing to predict speech perception outcomes in clinical settings.
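The two-factor structure described above comes from a principal component analysis of the detection thresholds. A small sketch of unrotated PCA loadings on simulated data with two latent factors standing in for the low-pass and high-pass ripple conditions (all numbers hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical thresholds (rows = 67 listeners, cols = 4 ripple conditions):
# two low-pass conditions driven by one latent factor, two high-pass
# conditions driven by another, plus measurement noise.
latent_lp = rng.normal(size=(67, 1))
latent_hp = rng.normal(size=(67, 1))
data = np.hstack([
    latent_lp + 0.3 * rng.normal(size=(67, 2)),  # low-pass static / moving
    latent_hp + 0.3 * rng.normal(size=(67, 2)),  # high-pass static / moving
])

# PCA on the correlation matrix; scaled eigenvectors are the loadings.
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
print(np.round(loadings[:, :2], 2))           # first two components
```

With this structure, the low-pass variables load on one component and the high-pass variables on the other, mirroring the two-factor result reported above.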

from #Audiology via ola Kala on Inoreader https://ift.tt/2rgXycC
via IFTTT

School-Aged Children's Phonological Accuracy in Multisyllabic Words on a Whole-Word Metric

Purpose
The purpose of this study is to examine differences in phonological accuracy in multisyllabic words (MSWs) on a whole-word metric, longitudinally and cross-sectionally, for elementary school–aged children with typical development (TD) and with history of protracted phonological development (PPD).
Method
Three mismatch subtotals (Lexical influence, Word Structure, and segmental Features, together forming a Whole Word total) were evaluated in 3 multivariate analyses: (a) a longitudinal comparison (n = 22), at age 5 and 8 years; (b) a cross-sectional comparison of 8- to 10-year-olds (n = 12 per group) with TD and with history of PPD; and (c) a comparison of the group with history of PPD (n = 12) with a larger 5-year-old group (n = 62).
Results
Significant effect sizes (partial eta squared, ηp²) found for mismatch totals were as follows: (a) moderate (Lexical, Structure) and large (Features) between ages 5 and 8 to 10 years, mismatch frequency decreasing developmentally, and (b) large between 8- to 10-year-olds with TD and with history of PPD (Structure, Features; minimal lexical influences), in favor of participants with TD. Mismatch frequencies were equivalent for 8- to 10-year-olds with history of PPD and 5-year-olds with TD. Classification accuracy in original subgroupings was 100% and 91% for 8- to 10-year-olds with TD and with history of PPD, respectively, and 86% for 5-year-olds with TD.
Conclusion
Phonological accuracy in MSW production was differentiated for elementary school–aged children with TD and PPD, using a whole-word metric. To assist with the identification of children with ongoing PPD, the metric has the ability to detect weaknesses and track progress in global MSW phonological production.

from #Audiology via ola Kala on Inoreader https://ift.tt/2ra3VOA
via IFTTT

A Multimethod Analysis of Pragmatic Skills in Children and Adolescents With Fragile X Syndrome, Autism Spectrum Disorder, and Down Syndrome

Purpose
Pragmatic language skills are often impaired above and beyond general language delays in individuals with neurodevelopmental disabilities. This study used a multimethod approach to language sample analysis to characterize syndrome- and sex-specific profiles across different neurodevelopmental disabilities and to examine the congruency of 2 analysis techniques.
Method
Pragmatic skills of young males and females with fragile X syndrome with autism spectrum disorder (FXS-ASD, n = 61) and without autism spectrum disorder (FXS-O, n = 40), Down syndrome (DS, n = 42), and typical development (TD, n = 37) and males with idiopathic autism spectrum disorder only (ASD-O, n = 29) were compared using variables obtained from a detailed hand-coding system contrasted with similar variables obtained automatically from the language analysis program Systematic Analysis of Language Transcripts (SALT).
Results
Noncontingent language and perseveration were characteristic of the pragmatic profiles of boys and girls with FXS-ASD and boys with ASD-O. Boys with ASD-O also initiated turns less often and were more nonresponsive than other groups, and girls with FXS-ASD were more nonresponsive than their male counterparts. Hand-coding and SALT methods were largely convergent with some exceptions.
Conclusion
Results suggest both similarities and differences in the pragmatic profiles observed across different neurodevelopmental disabilities, including idiopathic and FXS-associated cases of ASD, as well as an important sex difference in FXS-ASD. These findings and congruency between the 2 language sample analysis techniques together have important implications for assessment and intervention efforts.

from #Audiology via ola Kala on Inoreader https://ift.tt/2QstjKt
via IFTTT

Individualized Patient Vocal Priorities for Tailored Therapy

Purpose
The purposes of this study are to introduce the concept of vocal priorities based on acoustic correlates, to develop an instrument to determine these vocal priorities, and to analyze the pattern of vocal priorities in patients with voice disorders.
Method
Questions probing the importance of 5 vocal attributes (vocal clarity, loudness, mean speaking pitch, pitch range, vocal endurance) were generated from consensus conference involving speech-language pathologists, laryngologists, and voice scientists, as well as patient feedback. The responses to the preliminary items from 213 subjects were subjected to exploratory factor analysis, which confirmed 4 of the predefined domains. The final instrument consisted of a 16-item Vocal Priority Questionnaire probing the relative importance of clarity, loudness, mean speaking pitch, and pitch range.
Results
The Vocal Priority Questionnaire had high reliability (Cronbach's α = .824) and good construct validity. A majority of the cohort (61%) ranked vocal clarity as their highest vocal priority, and 20%, 12%, and 7% ranked loudness, mean speaking pitch, and pitch range, respectively, as their highest priority. The frequencies of the highest ranked priorities did not differ by voice diagnosis or by sex. Considerable individual variation in vocal priorities existed within these large trends.
Conclusions
A patient's vocal priorities can be identified and taken into consideration in planning behavioral or surgical intervention for a voice disorder. Inclusion of vocal priorities in treatment planning empowers the patient in shared decision making, helps the clinician tailor treatment, and may also improve therapy compliance.
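The reliability figure reported above, Cronbach's α, compares the summed variance of the individual items with the variance of the total score. A minimal sketch on hypothetical questionnaire ratings, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is 2-D: rows = respondents, cols = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point ratings on 4 items of one priority subscale
ratings = [[5, 4, 5, 4],
           [3, 3, 2, 3],
           [4, 4, 4, 5],
           [2, 1, 2, 2],
           [5, 5, 4, 4]]
print(round(cronbach_alpha(ratings), 3))
```

Values above roughly .8, like the .824 reported above, are conventionally taken as good internal consistency.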

from #Audiology via ola Kala on Inoreader https://ift.tt/2FX4QfG
via IFTTT

Basic Measures of Prosody in Spontaneous Speech of Children With Early and Late Cochlear Implantation

Purpose
Relative to normally hearing (NH) peers, the speech of children with cochlear implants (CIs) has been found to have deviations such as a high fundamental frequency, elevated jitter and shimmer, and inadequate intonation. However, two important dimensions of prosody (temporal and spectral) have not been systematically investigated. Given that, in general, the resolution in CI hearing is best for the temporal dimension and worst for the spectral dimension, we expected this hierarchy to be reflected in the amount of CI speech's deviation from NH speech. Deviations, however, were expected to diminish with increasing device experience.
Method
Of 9 Dutch early- and late-implanted (division at 2 years of age) children and 12 hearing age-matched NH controls, spontaneous speech was recorded at 18, 24, and 30 months after implantation (CI) or birth (NH). Six spectral and temporal outcome measures were compared between groups, sessions, and genders.
Results
On most measures, interactions of Group and/or Gender with Session were significant. For CI recipients as compared with controls, performance on temporal measures was not in general more deviant than spectral measures, although differences were found for individual measures. The late-implanted group had a tendency to be closer to the NH group than the early-implanted group. Groups converged over time.
Conclusions
Results did not support the phonetic dimension hierarchy hypothesis, suggesting that the appropriateness of the production of basic prosodic measures does not depend on auditory resolution. Rather, it seems to depend on the amount of control necessary for speech production.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rjDqqj
via IFTTT

The Coexistence of Disabling Conditions in Children Who Stutter: Evidence From the National Health Interview Survey

Purpose
Stuttering is a disorder that has been associated with coexisting developmental disorders. To date, detailed descriptions of the coexistence of such conditions have not consistently emerged in the literature. Identifying and understanding these conditions can be important to the overall management of children who stutter (CWS). The objective of this study was to generate a profile of the existence of disabling developmental conditions among CWS using national data.
Method
Six years of data from the National Health Interview Survey (2010–2015) were analyzed for this project. The sample consisted of children whose respondents clearly indicated the presence or absence of stuttering. Chi-square tests of independence were used for comparing categorical variables; and independent-samples t tests, for comparing continuous variables. Multiple logistic regression analyses were used for determining the odds of having a coexisting disabling developmental condition.
Results
This study sample included 62,450 children, of which 1,231 were CWS. Overall, the presence of at least 1 disabling developmental condition was 5.5 times higher in CWS when compared with children who do not stutter. The presence of stuttering was also associated with higher odds of each of the following coexisting developmental conditions: intellectual disability (odds ratio [OR] = 6.67, p < .001), learning disability (OR = 5.45, p < .001), attention-deficit hyperactivity disorder/attention-deficit disorder (OR = 3.09, p < .001), seizures (OR = 7.52, p < .001), autism/Asperger's/pervasive developmental disorder (OR = 5.48, p < .001), and any other developmental delay (OR = 7.10, p < .001).
Conclusion
Evidence from the National Health Interview Survey suggests a higher prevalence of coexisting developmental disabilities in CWS. The existence of coexisting disabling developmental conditions should be considered as part of an overall management plan for CWS.
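The odds ratios above are adjusted estimates from multiple logistic regression; the unadjusted version of the same statistic comes directly from a 2 × 2 table. A sketch with hypothetical counts (the study reports only the adjusted ORs, not the underlying cells):

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 table: the odds of the condition
    among the exposed group divided by the odds among the unexposed."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))

# Hypothetical counts: learning disability among children who stutter
# (CWS) versus children who do not (all four numbers invented).
print(round(odds_ratio(250, 981, 2800, 58419), 2))
```

Logistic regression generalizes this to adjust for covariates such as age and sex, which is why the study's reported ORs differ from what raw cell counts would give.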

from #Audiology via ola Kala on Inoreader https://ift.tt/2AOZPRG
via IFTTT

Data-Driven Classification of Dysarthria Profiles in Children With Cerebral Palsy

Purpose
The objectives of this study were to examine different speech profiles among children with dysarthria secondary to cerebral palsy (CP) and to characterize the effect of different speech profiles on intelligibility.
Method
Twenty 5-year-old children with dysarthria secondary to CP and 20 typically developing children were included in this study. Six acoustic and perceptual speech measures were selected to quantify a range of segmental and suprasegmental speech characteristics and were measured from children's sentence productions. Hierarchical cluster analysis was used to identify naturally occurring subgroups of children who had similar profiles of speech features.
Results
Results revealed 4 naturally occurring speech clusters among children: 1 cluster of children with typical development and 3 clusters of children with dysarthria secondary to CP. Two of the 3 dysarthria clusters had statistically equivalent intelligibility levels but significantly differed in articulation rate and degree of hypernasality.
Conclusion
This study provides initial evidence that different speech profiles exist among 5-year-old children with dysarthria secondary to CP, even among children with similar intelligibility levels, suggesting the potential for developing a pediatric dysarthria classification system that could be used to stratify children with dysarthria into meaningful subgroups for studying speech motor development and efficacy of interventions.

from #Audiology via ola Kala on Inoreader https://ift.tt/2r8Gaqv
via IFTTT

Identification of Affective State Change in Adults With Aphasia Using Speech Acoustics

Purpose
The current study aimed to identify objective acoustic measures related to affective state change in the speech of adults with post-stroke aphasia.
Method
The speech of 20 post-stroke adults with aphasia was recorded during picture description and administration of the Western Aphasia Battery–Revised (Kertesz, 2006). In addition, participants completed the Self-Assessment Manikin (Bradley & Lang, 1994) and the Stress Scale (Tobii Dynavox, 1981–2016) before and after the language tasks. Each participant's speech was then analyzed to detect changes in affective state test scores between the beginning and the end of the session.
Results
Machine learning revealed moderate success in classifying depression, minimal success in predicting depression and stress numeric scores, and minimal success in classifying changes in affective state class between the beginning and ending speech.
Conclusions
The results suggest the existence of objectively measurable aspects of speech that may be used to identify changes in acute affect from adults with aphasia. This work is exploratory and hypothesis-generating; more work will be needed to make conclusive claims. Further work in this area could lead to automated tools to assist clinicians with their diagnoses of stress, depression, and other forms of affect in adults with aphasia.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FLoZ8s
via IFTTT

Language Skill Mediates the Relationship Between Language Load and Articulatory Variability in Children With Language and Speech Sound Disorders

Purpose
The aim of the study was to investigate the relationship between language load and articulatory variability in children with language and speech sound disorders, including childhood apraxia of speech.
Method
Forty-six children, ages 48–92 months, participated in the current study, including children with speech sound disorder, developmental language disorder (aka specific language impairment), childhood apraxia of speech, and typical development. Children imitated (low language load task) then retrieved (high language load task) agent + action phrases. Articulatory variability was quantified using speech kinematics. We assessed language status and speech status (typical vs. impaired) in relation to articulatory variability.
Results
All children showed increased articulatory variability in the retrieval task compared with the imitation task. However, only children with language impairment showed a disproportionate increase in articulatory variability in the retrieval task relative to peers with typical language skills.
Conclusion
Higher-level language processes affect lower-level speech motor control processes, and this relationship appears to be more strongly mediated by language than speech skill.

from #Audiology via ola Kala on Inoreader https://ift.tt/2G1UThf
via IFTTT

An Eye-Tracking Study of Receptive Verb Knowledge in Toddlers

Purpose
We examined receptive verb knowledge in 22- to 24-month-old toddlers with a dynamic video eye-tracking test. The primary goal of the study was to examine the utility of eye-gaze measures that are commonly used to study noun knowledge for studying verb knowledge.
Method
Forty typically developing toddlers participated. They viewed 2 videos side by side (e.g., girl clapping, same girl stretching) and were asked to find one of them (e.g., “Where is she clapping?”). Their eye-gaze, recorded by a Tobii T60XL eye-tracking system, was analyzed as a measure of their knowledge of the verb meanings. Noun trials were included as controls. We examined correlations between eye-gaze measures and score on the MacArthur–Bates Communicative Development Inventories (CDI; Fenson et al., 1994), a standard parent report measure of expressive vocabulary to see how well various eye-gaze measures predicted CDI score.
Results
A common measure of knowledge—a 15% increase in looking time to the target video from a baseline phase to the test phase—did correlate with CDI score but had to be operationalized differently for verbs than for nouns. A 2nd common measure, latency of 1st look to the target, correlated with CDI score for nouns, as in previous work, but did not for verbs. A 3rd measure, fixation density, correlated for both nouns and verbs, although the correlation went in different directions.
Conclusions
The dynamic nature of videos depicting verb knowledge results in differences in eye-gaze as compared to static images depicting nouns. An eye-tracking assessment of verb knowledge is worthwhile to develop. However, the particular dependent measures used may differ from those used for static images and nouns.
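The 15% looking-time criterion described above reduces to a simple comparison of looking proportions. The sketch below is illustrative only, and it assumes the criterion is an absolute increase in the proportion of looking to the target (the abstract does not specify absolute vs. relative change); the function name and values are hypothetical.

```python
def shows_word_knowledge(baseline_prop, test_prop, threshold=0.15):
    """Return True if the proportion of looking to the target video
    increased by at least `threshold` (absolute change assumed)
    from the baseline phase to the test phase."""
    return (test_prop - baseline_prop) >= threshold

# e.g., looking to the target rises from 0.50 at baseline to 0.70 at test
knows = shows_word_knowledge(0.50, 0.70)
```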

from #Audiology via ola Kala on Inoreader https://ift.tt/2rkGZfG
via IFTTT

The Relationship Between Non-Orthographic Language Abilities and Reading Performance in Chronic Aphasia: An Exploration of the Primary Systems Hypothesis

Purpose
This study investigated the relationship between non-orthographic language abilities and reading in order to examine assumptions of the primary systems hypothesis and further our understanding of language processing poststroke.
Method
Performance on non-orthographic semantic, phonologic, and syntactic tasks, as well as oral reading and reading comprehension tasks, was assessed in 43 individuals with aphasia. Correlation and regression analyses were conducted to determine the relationship between these measures. In addition, analyses of variance examined differences within and between reading groups (within normal limits, phonological, deep, or global alexia).
Results
Results showed that non-orthographic language abilities were significantly related to reading abilities. Semantics was most predictive of regular and irregular word reading, whereas phonology was most predictive of pseudohomophone and nonword reading. Written word and paragraph comprehension were primarily supported by semantics, whereas written sentence comprehension was related to semantic, phonologic, and syntactic performance. Finally, severity of alexia was found to reflect severity of semantic and phonologic impairment.
Conclusions
Findings support the primary systems view of language by showing that non-orthographic language abilities and reading abilities are closely linked. This preliminary work requires replication and extension; however, current results highlight the importance of routine, integrated assessment and treatment of spoken and written language in aphasia.
Supplemental Material
https://doi.org/10.23641/asha.7403963

from #Audiology via ola Kala on Inoreader https://ift.tt/2FX4LZq
via IFTTT

Modifying and Validating a Measure of Chronic Stress for People With Aphasia

Purpose
Chronic stress is likely a common experience among people with the language impairment of aphasia. Importantly, chronic stress reportedly alters the neural networks central to learning and memory—essential ingredients of aphasia rehabilitation. Before we can explore the influence of chronic stress on rehabilitation outcomes, we must be able to measure chronic stress in this population. The purpose of this study was to (a) modify a widely used measure of chronic stress (Perceived Stress Scale [PSS]; Cohen & Janicki-Deverts, 2012) to fit the communication needs of people with aphasia (PWA) and (b) validate the modified PSS (mPSS) with PWA.
Method
Following systematic modification of the PSS (with permission), 72 PWA completed the validation portion of the study. Each participant completed the mPSS and measures of depression, anxiety, and resilience, and provided a hair sample from which the stress hormone cortisol was extracted. Pearson's product–moment correlations were used to examine associations between mPSS scores and these measures. Approximately 30% of participants completed the mPSS 1 week later to establish test–retest reliability, analyzed using an intraclass correlation coefficient.
Results
Significant positive correlations were evident between the reports of chronic stress and depression and anxiety. In addition, a significant inverse correlation was found between reports of chronic stress and resilience. The mPSS also showed evidence of test–retest reliability. No association was found between mPSS score and cortisol level.
Conclusion
Although questions remain about the biological correlates of chronic stress in people with poststroke aphasia, significant associations between chronic stress and several psychosocial variables provide evidence of validity of this emerging measure of chronic stress.
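The validity analyses above rest on Pearson's product–moment correlation. As a self-contained reminder of the statistic (with made-up scores, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative only: stress scores that rise with depression scores
# yield a strong positive correlation
r = pearson_r([10, 14, 18, 22], [3, 5, 8, 9])
```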

from #Audiology via ola Kala on Inoreader https://ift.tt/2G005SG
via IFTTT

Sensitivity to Morphosyntactic Information in Preschool Children With and Without Developmental Language Disorder: A Follow-Up Study

Purpose
This study tested children's sensitivity to tense/agreement information in fronted auxiliaries during online comprehension of questions (e.g., Are the nice little dogs running?). Data from children with developmental language disorder (DLD) were compared to previously published data from typically developing (TD) children matched according to sentence comprehension test scores.
Method
Fifteen 5-year-old children with DLD and fifteen 3-year-old TD children participated in a looking-while-listening task. Children viewed pairs of pictures, 1 with a single agent and 1 with multiple agents, accompanied by a sentence with a fronted auxiliary (is + single agent or are + two agents) or a control sentence. Proportion looking to the target was measured.
Results
Children with DLD did not show anticipatory looking based on the number information contained in the auxiliary (is or are) as the younger TD children had. Both groups showed significant increases in looking to the target upon hearing the subject noun (e.g., dogs).
Conclusions
Despite the groups' similar sentence comprehension abilities and ability to accurately respond to the information provided by the subject noun, children with DLD did not show sensitivity to number information on the fronted auxiliary. This insensitivity is considered in light of these children's weaker command of tense/agreement forms in their speech. Specifically, we consider the possibility that a failure to grasp the relation between the subject–verb sequence (e.g., dogs running) and the preceding auxiliary (e.g., are) in questions in the input contributes to the protracted inconsistency with which children with DLD produce auxiliary forms in obligatory contexts.
Supplemental Material
https://doi.org/10.23641/asha.7283459

from #Audiology via ola Kala on Inoreader https://ift.tt/2OKCrbL
via IFTTT

Structural Relationship Between Cognitive Processing and Syntactic Sentence Comprehension in Children With and Without Developmental Language Disorder

Purpose
We assessed the potential direct and indirect (mediated) influences of 4 cognitive mechanisms we believe are theoretically relevant to canonical and noncanonical sentence comprehension of school-age children with and without developmental language disorder (DLD).
Method
One hundred seventeen children with DLD and 117 propensity-matched typically developing (TD) children participated. Comprehension was indexed by children identifying the agent in implausible sentences. Children completed cognitive tasks indexing the latent predictors of fluid reasoning (FLD-R), controlled attention (CATT), complex working memory (cWM), and long-term memory language knowledge (LTM-LK).
Results
Structural equation modeling revealed that the best model fit was an indirect model in which cWM mediated the relationship among FLD-R, CATT, LTM-LK, and sentence comprehension. For TD children, comprehension of both sentence types was indirectly influenced by FLD-R (pattern recognition) and LTM-LK (linguistic chunking). For children with DLD, canonical sentence comprehension was indirectly influenced by LTM-LK and CATT, and noncanonical comprehension was indirectly influenced just by CATT.
Conclusions
cWM mediates sentence comprehension in children with DLD and TD children. For TD children, comprehension occurs automatically through pattern recognition and linguistic chunking. For children with DLD, comprehension is cognitively effortful. Whereas canonical comprehension occurs through chunking, noncanonical comprehension develops on a word-by-word basis.
Supplemental Material
https://doi.org/10.23641/asha.7178939

from #Audiology via ola Kala on Inoreader https://ift.tt/2P3IWvi
via IFTTT

Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech

Purpose
Successful speech processing depends on our ability to detect and integrate multisensory cues, yet there is minimal research on multisensory speech detection and integration by children. To address this need, we studied the development of speech detection for auditory (A), visual (V), and audiovisual (AV) input.
Method
Participants were 115 typically developing children clustered into age groups between 4 and 14 years. Speech detection (quantified by response times [RTs]) was determined for 1 stimulus, /buh/, presented in A, V, and AV modes (articulating vs. static facial conditions). Performance was analyzed not only in terms of traditional mean RTs but also in terms of the faster versus slower RTs (defined by the 1st vs. 3rd quartiles of RT distributions). These time regions were conceptualized respectively as reflecting optimal detection with efficient focused attention versus less optimal detection with inefficient focused attention due to attentional lapses.
Results
Mean RTs indicated better detection (a) of multisensory AV speech than A speech only in 4- to 5-year-olds and (b) of A and AV inputs than V input in all age groups. The faster RTs revealed that AV input did not improve detection in any group. The slower RTs indicated that (a) the processing of silent V input was significantly faster for the articulating than static face and (b) AV speech or facial input significantly minimized attentional lapses in all groups except 6- to 7-year-olds (a peaked U-shaped curve). Apparently, the AV benefit observed for mean performance in 4- to 5-year-olds arose from effects of attention.
Conclusions
The faster RTs indicated that AV input did not enhance detection in any group, but the slower RTs indicated that AV speech and dynamic V speech (mouthing) significantly minimized attentional lapses and thus did influence performance. Overall, A and AV inputs were detected consistently faster than V input; this result endorsed stimulus-bound auditory processing by these children.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rk5Y2H
via IFTTT

Relations Between Teacher Talk Characteristics and Child Language in Spoken-Language Deaf and Hard-of-Hearing Classrooms

Purpose
The aim of this study was to examine relations between teachers' conversational techniques and language gains made by their deaf and hard-of-hearing students. Specifically, we considered teachers' reformulations of child utterances, language elicitations, explicit vocabulary and syntax instruction, and wait time.
Method
This was an observational, longitudinal study that examined the characteristics of teacher talk in 25 kindergarten through second-grade classrooms of 68 deaf and hard-of-hearing children who used spoken English. Standardized assessments provided measures of child vocabulary and morphosyntax in the fall and spring of a school year. Characteristics of teacher talk were coded from classroom video recordings during the winter of that year.
Results
Hierarchical linear modeling indicated that reformulating child statements and explicitly teaching vocabulary were significant predictors of child vocabulary gains across a school year. Explicitly teaching vocabulary also significantly predicted gains in morphosyntax abilities. There were wide individual differences in the teachers' use of these conversational techniques.
Conclusion
Reformulation and explicit vocabulary instruction may be areas where training can help teachers improve, and improvements in the teachers' talk may benefit their students.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Fg1IeK
via IFTTT

Masthead



from #Audiology via ola Kala on Inoreader https://ift.tt/2L8oxjG
via IFTTT

Prevalence of Publication Bias Tests in Speech, Language, and Hearing Research

Purpose
The purpose of this research note is to systematically document the extent that researchers who publish in American Speech-Language-Hearing Association (ASHA) journals search for and include unpublished literature in their meta-analyses and test for publication bias.
Method
This research note searched all ASHA peer-reviewed journals for published meta-analyses and reviewed all qualifying articles for characteristics related to the acknowledgment and assessment of publication bias.
Results
Of meta-analyses published in ASHA journals, 75% discuss publication bias in some form; however, fewer than 50% test for publication bias. Further, only 38% (n = 11) interpreted the findings of these tests.
Conclusion
Findings reveal that more attention is needed to the presence and impact of publication bias. This research note concludes with 5 recommendations for addressing publication bias.
Supplemental Material
https://doi.org/10.23641/asha.7268648
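One widely used publication-bias test (named here as an example; the research note itself does not specify which tests the reviewed meta-analyses used) is Egger's regression: each study's standardized effect (effect/SE) is regressed on its precision (1/SE), and an intercept far from zero suggests funnel-plot asymmetry. A minimal sketch with made-up effect sizes, not data from the reviewed meta-analyses:

```python
def egger_regression(effects, ses):
    """Egger's test: regress standardized effect (effect/SE) on precision (1/SE).
    Returns (intercept, slope); an intercept far from 0 suggests asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# Symmetric (bias-free) case: every study estimates the same true effect
# (0.4), so the points fall on a line through the origin and the
# intercept is ~0; the slope recovers the effect.
intercept, slope = egger_regression([0.4, 0.4, 0.4], [1.0, 2.0, 4.0])
```

A full application would also test the intercept against its standard error rather than eyeballing its size.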

from #Audiology via ola Kala on Inoreader https://ift.tt/2B5KzQN
via IFTTT

Verb Variability and Morphosyntactic Priming With Typically Developing 2- and 3-Year-Olds

Purpose
This study was specifically designed to examine how verb variability and verb overlap in a morphosyntactic priming task affect typically developing children's use and generalization of auxiliary IS.
Method
Forty typically developing 2- to 3-year-old native English-speaking children with inconsistent auxiliary IS production were primed with 24 present progressive auxiliary IS sentences. Half of the children heard auxiliary IS primes with 24 unique verbs (high variability). The other half heard auxiliary IS primes with only 6 verbs, repeated 4 times each (low variability). In addition, half of the children heard prime–target pairs with overlapping verbs (lexical boost), whereas the other half heard prime–target pairs with nonoverlapping verbs (no lexical boost). To assess use and generalization of the targeted structure to untrained verbs, all children described probe items at baseline and 5 min and 24 hr after the priming task.
Results
Children in the high variability group demonstrated strong priming effects during the task and increased auxiliary IS production compared with baseline performance 5 min and 24 hr after the priming task, suggesting learning and generalization of the primed structure. Children in the low variability group showed no significant increases in auxiliary IS production and fell significantly below the high variability group in the 24-hr posttest. Verb overlap did not boost priming effects during the priming task or in posttest probes.
Conclusions
Typically developing children do indeed make use of lexical variability in their linguistic input to help them extract and generalize abstract grammatical rules. They can do this quite quickly, with relatively stable, robust learning occurring after a single optimally variable input session. With reduced variability, learning does not occur.

from #Audiology via ola Kala on Inoreader https://ift.tt/2CXiUme
via IFTTT

Frequencies in Perception and Production Differentially Affect Child Speech

Purpose
Frequent sounds and frequent words are both acquired at an earlier age and are produced by children more accurately. Recent research suggests that frequency is not always a facilitative concept, however. Interactions between input frequency in perception and practice frequency in production may limit or inhibit growth. In this study, we consider how a range of input frequencies affect production accuracy and referent identification.
Method
Thirty-three typically developing 3- and 4-year-olds participated in a novel word-learning task. In the initial test block, participants heard nonwords 1, 3, 6, or 10 times—produced either by a single talker or by multiple talkers—and then produced them immediately. In a posttest, participants heard all nonwords just once and then produced them. Referent identification was probed in between the test and posttest.
Results
Production accuracy was most clearly facilitated by an input frequency of 3 during the test block. Input frequency interacted with production practice, and the facilitative effect of input frequency did not carry over to the posttest. Talker variability did not affect accuracy, regardless of input frequency. The referent identification results did not favor talker variability or a particular input frequency value, but participants were able to learn the words at better than chance levels.
Conclusions
The results confirm that the input can be facilitative, but input frequency and production practice interact in ways that limit input-based learning, and more input is not always better. Future research on this interaction may allow clinicians to optimize various types of frequency commonly used during therapy.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rn6Sf0
via IFTTT

The Effects of Static and Moving Spectral Ripple Sensitivity on Unaided and Aided Speech Perception in Noise

Purpose
This study evaluated whether certain spectral ripple conditions were more informative than others in predicting ecologically relevant unaided and aided speech outcomes.
Method
A quasi-experimental study design was used to evaluate 67 older adult hearing aid users with bilateral, symmetrical hearing loss. Speech perception in noise was tested under conditions of unaided and aided, auditory-only and auditory–visual, and 2 types of noise. Predictors included age, audiometric thresholds, audibility, hearing aid compression, and modulation depth detection thresholds for moving (4-Hz) or static (0-Hz) 2-cycle/octave spectral ripples applied to carriers of broadband noise or 2000-Hz low- or high-pass filtered noise.
Results
A principal component analysis of the modulation detection data found that broadband and low-pass static and moving ripple detection thresholds loaded onto the first factor whereas high-pass static and moving ripple detection thresholds loaded onto a second factor. A linear mixed model revealed that audibility and the first factor (reflecting broadband and low-pass static and moving ripples) were significantly associated with speech perception performance. Similar results were found for unaided and aided speech scores. The interactions between speech conditions were not significant, suggesting that the relationship between ripples and speech perception was consistent regardless of visual cues or noise condition. High-pass ripple sensitivity was not correlated with speech understanding.
Conclusions
The results suggest that, for hearing aid users, poor speech understanding in noise and sensitivity to both static and slow-moving ripples may reflect deficits in the same underlying auditory processing mechanism. Significant factor loadings involving ripple stimuli with low-frequency content may suggest an impaired ability to use temporal fine structure information in the stimulus waveform. Support is provided for the use of spectral ripple testing to predict speech perception outcomes in clinical settings.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rgXycC
via IFTTT

School-Aged Children's Phonological Accuracy in Multisyllabic Words on a Whole-Word Metric

Purpose
The purpose of this study is to examine differences in phonological accuracy in multisyllabic words (MSWs) on a whole-word metric, longitudinally and cross-sectionally, for elementary school–aged children with typical development (TD) and with history of protracted phonological development (PPD).
Method
Three mismatch subtotals, Lexical influence, Word Structure, and segmental Features (forming a Whole Word total), were evaluated in 3 multivariate analyses: (a) a longitudinal comparison (n = 22), at age 5 and 8 years; (b) a cross-sectional comparison of 8- to 10-year-olds (n = 12 per group) with TD and with history of PPD; and (c) a comparison of the group with history of PPD (n = 12) with a larger 5-year-old group (n = 62).
Results
Significant effect sizes (partial η²) found for mismatch totals were as follows: (a) moderate (Lexical, Structure) and large (Features) between ages 5 and 8 to 10 years, mismatch frequency decreasing developmentally, and (b) large between 8- to 10-year-olds with TD and with history of PPD (Structure, Features; minimal lexical influences), in favor of participants with TD. Mismatch frequencies were equivalent for 8- to 10-year-olds with history of PPD and 5-year-olds with TD. Classification accuracy in original subgroupings was 100% and 91% for 8- to 10-year-olds with TD and with history of PPD, respectively, and 86% for 5-year-olds with TD.
Conclusion
Phonological accuracy in MSW production was differentiated for elementary school–aged children with TD and PPD, using a whole-word metric. To assist with the identification of children with ongoing PPD, the metric has the ability to detect weaknesses and track progress in global MSW phonological production.
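The effect sizes reported for the multivariate analyses above are partial η² values. As a generic reminder of the formula (a sketch with arbitrary sums of squares, not the study's computation):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta-squared: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Arbitrary sums of squares for illustration; by common benchmarks
# (~.01 small, ~.06 medium, ~.14 large), 0.30 is a large effect.
eta2 = partial_eta_squared(30.0, 70.0)
```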

from #Audiology via ola Kala on Inoreader https://ift.tt/2ra3VOA
via IFTTT

A Multimethod Analysis of Pragmatic Skills in Children and Adolescents With Fragile X Syndrome, Autism Spectrum Disorder, and Down Syndrome

Purpose
Pragmatic language skills are often impaired above and beyond general language delays in individuals with neurodevelopmental disabilities. This study used a multimethod approach to language sample analysis to characterize syndrome- and sex-specific profiles across different neurodevelopmental disabilities and to examine the congruency of 2 analysis techniques.
Method
Pragmatic skills of young males and females with fragile X syndrome with autism spectrum disorder (FXS-ASD, n = 61) and without autism spectrum disorder (FXS-O, n = 40), Down syndrome (DS, n = 42), and typical development (TD, n = 37) and males with idiopathic autism spectrum disorder only (ASD-O, n = 29) were compared using variables obtained from a detailed hand-coding system contrasted with similar variables obtained automatically from the language analysis program Systematic Analysis of Language Transcripts (SALT).
Results
Noncontingent language and perseveration were characteristic of the pragmatic profiles of boys and girls with FXS-ASD and boys with ASD-O. Boys with ASD-O also initiated turns less often and were more nonresponsive than other groups, and girls with FXS-ASD were more nonresponsive than their male counterparts. Hand-coding and SALT methods were largely convergent with some exceptions.
Conclusion
Results suggest both similarities and differences in the pragmatic profiles observed across different neurodevelopmental disabilities, including idiopathic and FXS-associated cases of ASD, as well as an important sex difference in FXS-ASD. These findings and congruency between the 2 language sample analysis techniques together have important implications for assessment and intervention efforts.

from #Audiology via ola Kala on Inoreader https://ift.tt/2QstjKt
via IFTTT

Individualized Patient Vocal Priorities for Tailored Therapy

Purpose
The purposes of this study are to introduce the concept of vocal priorities based on acoustic correlates, to develop an instrument to determine these vocal priorities, and to analyze the pattern of vocal priorities in patients with voice disorders.
Method
Questions probing the importance of 5 vocal attributes (vocal clarity, loudness, mean speaking pitch, pitch range, vocal endurance) were generated from consensus conference involving speech-language pathologists, laryngologists, and voice scientists, as well as patient feedback. The responses to the preliminary items from 213 subjects were subjected to exploratory factor analysis, which confirmed 4 of the predefined domains. The final instrument consisted of a 16-item Vocal Priority Questionnaire probing the relative importance of clarity, loudness, mean speaking pitch, and pitch range.
Results
The Vocal Priority Questionnaire had high reliability (Cronbach's α = .824) and good construct validity. A majority of the cohort (61%) ranked vocal clarity as their highest vocal priority, and 20%, 12%, and 7% ranked loudness, mean speaking pitch, and pitch range, respectively, as their highest priority. The frequencies of the highest ranked priorities did not differ by voice diagnosis or by sex. Considerable individual variation in vocal priorities existed within these large trends.
Conclusions
A patient's vocal priorities can be identified and taken into consideration in planning behavioral or surgical intervention for a voice disorder. Inclusion of vocal priorities in treatment planning empowers the patient in shared decision making, helps the clinician tailor treatment, and may also improve therapy compliance.
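The questionnaire's internal consistency above is summarized with Cronbach's α. A minimal pure-Python sketch of the statistic, using invented item scores rather than the questionnaire's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column = one item's scores across respondents)."""
    k = len(items)       # number of items
    n = len(items[0])    # number of respondents

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Three items whose scores track each other closely yield a high alpha
alpha = cronbach_alpha([[2, 4, 6, 8], [2, 4, 6, 8], [1, 5, 6, 8]])
```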

from #Audiology via ola Kala on Inoreader https://ift.tt/2FX4QfG
via IFTTT

Basic Measures of Prosody in Spontaneous Speech of Children With Early and Late Cochlear Implantation

Purpose
Relative to normally hearing (NH) peers, the speech of children with cochlear implants (CIs) has been found to have deviations such as a high fundamental frequency, elevated jitter and shimmer, and inadequate intonation. However, two important dimensions of prosody (temporal and spectral) have not been systematically investigated. Given that, in general, the resolution in CI hearing is best for the temporal dimension and worst for the spectral dimension, we expected this hierarchy to be reflected in the amount of CI speech's deviation from NH speech. Deviations, however, were expected to diminish with increasing device experience.
Method
Spontaneous speech of 9 Dutch early- and late-implanted children (division at 2 years of age) and 12 hearing age-matched NH controls was recorded at 18, 24, and 30 months after implantation (CI) or birth (NH). Six spectral and temporal outcome measures were compared between groups, sessions, and genders.
Results
On most measures, interactions of Group and/or Gender with Session were significant. For CI recipients as compared with controls, performance on temporal measures was not in general more deviant than on spectral measures, although differences were found for individual measures. The late-implanted group tended to be closer to the NH group than the early-implanted group was. Groups converged over time.
Conclusions
Results did not support the phonetic dimension hierarchy hypothesis, suggesting that the appropriateness of the production of basic prosodic measures does not depend on auditory resolution. Rather, it seems to depend on the amount of control necessary for speech production.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rjDqqj
via IFTTT

The Coexistence of Disabling Conditions in Children Who Stutter: Evidence From the National Health Interview Survey

Purpose
Stuttering is a disorder that has been associated with coexisting developmental disorders. To date, detailed descriptions of the coexistence of such conditions have not consistently emerged in the literature. Identifying and understanding these conditions can be important to the overall management of children who stutter (CWS). The objective of this study was to generate a profile of the existence of disabling developmental conditions among CWS using national data.
Method
Six years of data from the National Health Interview Survey (2010–2015) were analyzed for this project. The sample consisted of children whose respondents clearly indicated the presence or absence of stuttering. Chi-square tests of independence were used to compare categorical variables, and independent-samples t tests to compare continuous variables. Multiple logistic regression analyses were used to determine the odds of having a coexisting disabling developmental condition.
Results
This study sample included 62,450 children, of which 1,231 were CWS. Overall, the presence of at least 1 disabling developmental condition was 5.5 times higher in CWS when compared with children who do not stutter. The presence of stuttering was also associated with higher odds of each of the following coexisting developmental conditions: intellectual disability (odds ratio [OR] = 6.67, p < .001), learning disability (OR = 5.45, p < .001), attention-deficit hyperactivity disorder/attention-deficit disorder (OR = 3.09, p < .001), seizures (OR = 7.52, p < .001), autism/Asperger's/pervasive developmental disorder (OR = 5.48, p < .001), and any other developmental delay (OR = 7.10, p < .001).
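The odds ratios reported above compare the odds of each coexisting condition in CWS with the odds in children who do not stutter. As a minimal sketch of the underlying 2×2-table arithmetic (the cell counts below are made up for illustration and are not the survey's actual values):

```python
def odds_ratio(exposed_with, exposed_without, unexposed_with, unexposed_without):
    """Odds ratio from a 2x2 table: odds of the condition among the exposed
    group (e.g., CWS) divided by odds among the unexposed group."""
    return (exposed_with / exposed_without) / (unexposed_with / unexposed_without)

# Hypothetical counts: 100 of 1,231 CWS and 900 of 61,219 non-stuttering
# children reported as having a given developmental condition.
or_example = odds_ratio(100, 1131, 900, 60319)
```

In practice, the study's estimates come from multiple logistic regression, which adjusts for covariates rather than using raw cell counts, so this sketch shows only the unadjusted form of the statistic.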
Conclusion
Evidence from the National Health Interview Survey suggests a higher prevalence of coexisting developmental disabilities in CWS. The existence of coexisting disabling developmental conditions should be considered as part of an overall management plan for CWS.

from #Audiology via ola Kala on Inoreader https://ift.tt/2AOZPRG
via IFTTT

Data-Driven Classification of Dysarthria Profiles in Children With Cerebral Palsy

Purpose
The objectives of this study were to examine different speech profiles among children with dysarthria secondary to cerebral palsy (CP) and to characterize the effect of different speech profiles on intelligibility.
Method
Twenty 5-year-old children with dysarthria secondary to CP and 20 typically developing children were included in this study. Six acoustic and perceptual speech measures were selected to quantify a range of segmental and suprasegmental speech characteristics and were measured from children's sentence productions. Hierarchical cluster analysis was used to identify naturally occurring subgroups of children who had similar profiles of speech features.
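The abstract does not specify the linkage method or distance metric used in its hierarchical cluster analysis; as a rough illustration of how agglomerative clustering groups children with similar speech-feature profiles, here is a minimal single-linkage sketch over hypothetical 2-D feature vectors (real analyses would use all six measures, typically standardized):

```python
def single_linkage_clusters(points, k):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest (single linkage) until k clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical (articulation rate, hypernasality) profiles for 4 children:
groups = single_linkage_clusters([(3.1, 0.2), (3.0, 0.3), (1.4, 1.8), (1.5, 1.9)], 2)
```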
Results
Results revealed 4 naturally occurring speech clusters among children: 1 cluster of children with typical development and 3 clusters of children with dysarthria secondary to CP. Two of the 3 dysarthria clusters had statistically equivalent intelligibility levels but significantly differed in articulation rate and degree of hypernasality.
Conclusion
This study provides initial evidence that different speech profiles exist among 5-year-old children with dysarthria secondary to CP, even among children with similar intelligibility levels, suggesting the potential for developing a pediatric dysarthria classification system that could be used to stratify children with dysarthria into meaningful subgroups for studying speech motor development and efficacy of interventions.

from #Audiology via ola Kala on Inoreader https://ift.tt/2r8Gaqv
via IFTTT

Identification of Affective State Change in Adults With Aphasia Using Speech Acoustics

Purpose
The current study aimed to identify objective acoustic measures related to affective state change in the speech of adults with poststroke aphasia.
Method
The speech of 20 adults with poststroke aphasia was recorded during picture description and administration of the Western Aphasia Battery–Revised (Kertesz, 2006). In addition, participants completed the Self-Assessment Manikin (Bradley & Lang, 1994) and the Stress Scale (Tobii Dynavox, 1981–2016) before and after the language tasks. Speech from each participant was used to detect changes in affective state test scores between the beginning and ending speech samples.
Results
Machine learning revealed moderate success in classifying depression, minimal success in predicting depression and stress numeric scores, and minimal success in classifying changes in affective state class between the beginning and ending speech.
Conclusions
The results suggest the existence of objectively measurable aspects of speech that may be used to identify changes in acute affect in adults with aphasia. This work is exploratory and hypothesis-generating; more work will be needed to make conclusive claims. Further work in this area could lead to automated tools to assist clinicians with their diagnoses of stress, depression, and other forms of affect in adults with aphasia.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FLoZ8s
via IFTTT

Language Skill Mediates the Relationship Between Language Load and Articulatory Variability in Children With Language and Speech Sound Disorders

Purpose
The aim of the study was to investigate the relationship between language load and articulatory variability in children with language and speech sound disorders, including childhood apraxia of speech.
Method
Forty-six children, ages 48–92 months, participated in the current study, including children with speech sound disorder, developmental language disorder (also known as specific language impairment), childhood apraxia of speech, and typical development. Children first imitated (low language load task) and then retrieved (high language load task) agent + action phrases. Articulatory variability was quantified using speech kinematics. We assessed language status and speech status (typical vs. impaired) in relation to articulatory variability.
Results
All children showed increased articulatory variability in the retrieval task compared with the imitation task. However, only children with language impairment showed a disproportionate increase in articulatory variability in the retrieval task relative to peers with typical language skills.
Conclusion
Higher-level language processes affect lower-level speech motor control processes, and this relationship appears to be more strongly mediated by language than speech skill.

from #Audiology via ola Kala on Inoreader https://ift.tt/2G1UThf
via IFTTT

An Eye-Tracking Study of Receptive Verb Knowledge in Toddlers

Purpose
We examined receptive verb knowledge in 22- to 24-month-old toddlers with a dynamic video eye-tracking test. The primary goal of the study was to examine the utility of eye-gaze measures that are commonly used to study noun knowledge for studying verb knowledge.
Method
Forty typically developing toddlers participated. They viewed 2 videos side by side (e.g., girl clapping, same girl stretching) and were asked to find one of them (e.g., “Where is she clapping?”). Their eye-gaze, recorded by a Tobii T60XL eye-tracking system, was analyzed as a measure of their knowledge of the verb meanings. Noun trials were included as controls. We examined correlations between eye-gaze measures and score on the MacArthur–Bates Communicative Development Inventories (CDI; Fenson et al., 1994), a standard parent-report measure of expressive vocabulary, to see how well various eye-gaze measures predicted CDI score.
Results
A common measure of knowledge, a 15% increase in looking time to the target video from a baseline phase to the test phase, did correlate with CDI score but had to be operationalized differently for verbs than for nouns. A 2nd common measure, latency of 1st look to the target, correlated with CDI score for nouns, as in previous work, but did not for verbs. A 3rd measure, fixation density, correlated for both nouns and verbs, although the correlations went in different directions.
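The 15% looking-time criterion can be expressed as a simple threshold check. The proportions below are hypothetical, and the real analysis involves additional steps (trial aggregation, defined time windows) not shown here:

```python
def shows_word_knowledge(baseline_prop, test_prop, threshold=0.15):
    """True if the proportion of looking time to the target video rises by at
    least `threshold` (the abstract's 15% criterion) from baseline to test."""
    return (test_prop - baseline_prop) >= threshold

# Hypothetical trial: 50% target looking at baseline, 68% at test.
known = shows_word_knowledge(0.50, 0.68)
```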
Conclusions
The dynamic nature of videos depicting verb knowledge results in differences in eye-gaze as compared to static images depicting nouns. An eye-tracking assessment of verb knowledge is worth developing. However, the particular dependent measures used may need to differ from those used for static images and nouns.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rkGZfG
via IFTTT

The Relationship Between Non-Orthographic Language Abilities and Reading Performance in Chronic Aphasia: An Exploration of the Primary Systems Hypothesis

Purpose
This study investigated the relationship between non-orthographic language abilities and reading in order to examine assumptions of the primary systems hypothesis and further our understanding of language processing poststroke.
Method
Performance on non-orthographic semantic, phonologic, and syntactic tasks, as well as oral reading and reading comprehension tasks, was assessed in 43 individuals with aphasia. Correlation and regression analyses were conducted to determine the relationship between these measures. In addition, analyses of variance examined differences within and between reading groups (within normal limits, phonological, deep, or global alexia).
Results
Results showed that non-orthographic language abilities were significantly related to reading abilities. Semantics was most predictive of regular and irregular word reading, whereas phonology was most predictive of pseudohomophone and nonword reading. Written word and paragraph comprehension were primarily supported by semantics, whereas written sentence comprehension was related to semantic, phonologic, and syntactic performance. Finally, severity of alexia was found to reflect severity of semantic and phonologic impairment.
Conclusions
Findings support the primary systems view of language by showing that non-orthographic language abilities and reading abilities are closely linked. This preliminary work requires replication and extension; however, current results highlight the importance of routine, integrated assessment and treatment of spoken and written language in aphasia.
Supplemental Material
https://doi.org/10.23641/asha.7403963

from #Audiology via ola Kala on Inoreader https://ift.tt/2FX4LZq
via IFTTT

Modifying and Validating a Measure of Chronic Stress for People With Aphasia

Purpose
Chronic stress is likely a common experience among people with the language impairment of aphasia. Importantly, chronic stress reportedly alters the neural networks central to learning and memory—essential ingredients of aphasia rehabilitation. Before we can explore the influence of chronic stress on rehabilitation outcomes, we must be able to measure chronic stress in this population. The purpose of this study was to (a) modify a widely used measure of chronic stress (Perceived Stress Scale [PSS]; Cohen & Janicki-Deverts, 2012) to fit the communication needs of people with aphasia (PWA) and (b) validate the modified PSS (mPSS) with PWA.
Method
Following systematic modification of the PSS (with permission), 72 PWA completed the validation portion of the study. Each participant completed the mPSS and measures of depression, anxiety, and resilience, and provided a hair sample from which the stress hormone cortisol was extracted. Pearson's product–moment correlations were used to examine associations between mPSS scores and these measures. Approximately 30% of participants completed the mPSS 1 week later to establish test–retest reliability, analyzed using an intraclass correlation coefficient.
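For readers unfamiliar with the statistic, Pearson's product–moment correlation is straightforward to compute; a self-contained sketch, using illustrative numbers rather than the study's data:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical mPSS and depression scores for 5 participants; a positive r
# here mirrors the direction of association the study reports.
r = pearson_r([10, 14, 18, 22, 30], [5, 6, 9, 10, 14])
```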
Results
Significant positive correlations were evident between the reports of chronic stress and depression and anxiety. In addition, a significant inverse correlation was found between reports of chronic stress and resilience. The mPSS also showed evidence of test–retest reliability. No association was found between mPSS score and cortisol level.
Conclusion
Although questions remain about the biological correlates of chronic stress in people with poststroke aphasia, significant associations between chronic stress and several psychosocial variables provide evidence of validity of this emerging measure of chronic stress.

from #Audiology via ola Kala on Inoreader https://ift.tt/2G005SG
via IFTTT

Sensitivity to Morphosyntactic Information in Preschool Children With and Without Developmental Language Disorder: A Follow-Up Study

Purpose
This study tested children's sensitivity to tense/agreement information in fronted auxiliaries during online comprehension of questions (e.g., Are the nice little dogs running?). Data from children with developmental language disorder (DLD) were compared to previously published data from typically developing (TD) children matched according to sentence comprehension test scores.
Method
Fifteen 5-year-old children with DLD and fifteen 3-year-old TD children participated in a looking-while-listening task. Children viewed pairs of pictures, 1 with a single agent and 1 with multiple agents, accompanied by a sentence with a fronted auxiliary (is + single agent or are + two agents) or a control sentence. Proportion looking to the target was measured.
Results
Children with DLD did not show anticipatory looking based on the number information contained in the auxiliary (is or are) as the younger TD children had. Both groups showed significant increases in looking to the target upon hearing the subject noun (e.g., dogs).
Conclusions
Despite the groups' similar sentence comprehension abilities and ability to accurately respond to the information provided by the subject noun, children with DLD did not show sensitivity to number information on the fronted auxiliary. This insensitivity is considered in light of these children's weaker command of tense/agreement forms in their speech. Specifically, we consider the possibility that failure to grasp the relation between the subject–verb sequence (e.g., dogs running) and preceding information (e.g., are) in questions in the input contributes to the protracted inconsistency in producing auxiliary forms in obligatory contexts by children with DLD.
Supplemental Material
https://doi.org/10.23641/asha.7283459

from #Audiology via ola Kala on Inoreader https://ift.tt/2OKCrbL
via IFTTT

Structural Relationship Between Cognitive Processing and Syntactic Sentence Comprehension in Children With and Without Developmental Language Disorder

Purpose
We assessed the potential direct and indirect (mediated) influences of 4 cognitive mechanisms we believe are theoretically relevant to canonical and noncanonical sentence comprehension of school-age children with and without developmental language disorder (DLD).
Method
One hundred seventeen children with DLD and 117 propensity-matched typically developing (TD) children participated. Comprehension was indexed by children identifying the agent in implausible sentences. Children completed cognitive tasks indexing the latent predictors of fluid reasoning (FLD-R), controlled attention (CATT), complex working memory (cWM), and long-term memory language knowledge (LTM-LK).
Results
Structural equation modeling revealed that the best model fit was an indirect model in which cWM mediated the relationship among FLD-R, CATT, LTM-LK, and sentence comprehension. For TD children, comprehension of both sentence types was indirectly influenced by FLD-R (pattern recognition) and LTM-LK (linguistic chunking). For children with DLD, canonical sentence comprehension was indirectly influenced by LTM-LK and CATT, and noncanonical comprehension was indirectly influenced just by CATT.
Conclusions
cWM mediates sentence comprehension in children with DLD and TD children. For TD children, comprehension occurs automatically through pattern recognition and linguistic chunking. For children with DLD, comprehension is cognitively effortful. Whereas canonical comprehension occurs through chunking, noncanonical comprehension develops on a word-by-word basis.
Supplemental Material
https://doi.org/10.23641/asha.7178939

from #Audiology via ola Kala on Inoreader https://ift.tt/2P3IWvi
via IFTTT

Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech

Purpose
Successful speech processing depends on our ability to detect and integrate multisensory cues, yet there is minimal research on multisensory speech detection and integration by children. To address this need, we studied the development of speech detection for auditory (A), visual (V), and audiovisual (AV) input.
Method
Participants were 115 typically developing children clustered into age groups between 4 and 14 years. Speech detection (quantified by response times [RTs]) was determined for 1 stimulus, /buh/, presented in A, V, and AV modes (articulating vs. static facial conditions). Performance was analyzed not only in terms of traditional mean RTs but also in terms of the faster versus slower RTs (defined by the 1st vs. 3rd quartiles of RT distributions). These time regions were conceptualized respectively as reflecting optimal detection with efficient focused attention versus less optimal detection with inefficient focused attention due to attentional lapses.
Results
Mean RTs indicated better detection (a) of multisensory AV speech than A speech only in 4- to 5-year-olds and (b) of A and AV inputs than V input in all age groups. The faster RTs revealed that AV input did not improve detection in any group. The slower RTs indicated that (a) the processing of silent V input was significantly faster for the articulating than static face and (b) AV speech or facial input significantly minimized attentional lapses in all groups except 6- to 7-year-olds (a peaked U-shaped curve). Apparently, the AV benefit observed for mean performance in 4- to 5-year-olds arose from effects of attention.
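One simple way to operationalize the faster versus slower RT regions described above is to average the fastest and slowest quartiles of each participant's RT distribution; the exact computation used in the study may differ, and the RTs below are invented:

```python
def quartile_means(rts):
    """Mean RT of the fastest and slowest quartiles of a response-time sample.
    The fast region is read as efficient detection; the slow region as
    detection degraded by attentional lapses."""
    s = sorted(rts)
    k = max(1, len(s) // 4)          # size of each quartile region
    fast = sum(s[:k]) / k            # mean of the fastest quartile
    slow = sum(s[-k:]) / k           # mean of the slowest quartile
    return fast, slow

# Hypothetical RTs (ms) from one child on one stimulus condition:
fast_mean, slow_mean = quartile_means([530, 410, 620, 450, 480, 700, 390, 560])
```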
Conclusions
The faster RTs indicated that AV input did not enhance detection in any group, but the slower RTs indicated that AV speech and dynamic V speech (mouthing) significantly minimized attentional lapses and thus did influence performance. Overall, A and AV inputs were detected consistently faster than V input; this result endorsed stimulus-bound auditory processing by these children.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rk5Y2H
via IFTTT

Relations Between Teacher Talk Characteristics and Child Language in Spoken-Language Deaf and Hard-of-Hearing Classrooms

Purpose
The aim of this study was to examine relations between teachers' conversational techniques and language gains made by their deaf and hard-of-hearing students. Specifically, we considered teachers' reformulations of child utterances, language elicitations, explicit vocabulary and syntax instruction, and wait time.
Method
This was an observational, longitudinal study that examined the characteristics of teacher talk in 25 kindergarten through second-grade classrooms of 68 deaf and hard-of-hearing children who used spoken English. Standardized assessments provided measures of child vocabulary and morphosyntax in the fall and spring of a school year. Characteristics of teacher talk were coded from classroom video recordings during the winter of that year.
Results
Hierarchical linear modeling indicated that reformulating child statements and explicitly teaching vocabulary were significant predictors of child vocabulary gains across a school year. Explicitly teaching vocabulary also significantly predicted gains in morphosyntax abilities. There were wide individual differences in the teachers' use of these conversational techniques.
Conclusion
Reformulation and explicit vocabulary instruction may be areas where training can help teachers improve, and improvements in the teachers' talk may benefit their students.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Fg1IeK
via IFTTT

Masthead



from #Audiology via ola Kala on Inoreader https://ift.tt/2L8oxjG
via IFTTT