Thursday, August 17, 2017

Beyond Sentences: Using the Expression, Reception, and Recall of Narratives Instrument to Assess Communication in School-Aged Children With Autism Spectrum Disorder

Purpose
Impairments in the social use of language are universal in autism spectrum disorder (ASD), but few standardized measures evaluate communication skills above the level of individual words or sentences. This study evaluated the Expression, Reception, and Recall of Narrative Instrument (ERRNI; Bishop, 2004) to determine its contribution to assessing language and communicative impairment beyond the sentence level in children with ASD.
Method
A battery of assessments, including measures of cognition, language, pragmatics, severity of autism symptoms, and adaptive functioning, was administered to 74 8- to 9-year-old intellectually able children with ASD.
Results
Average performance on the ERRNI was significantly poorer than on the Clinical Evaluation of Language Fundamentals–Fourth Edition (CELF-4). In addition, ERRNI scores reflecting the number and quality of relevant story components included in the participants' narratives were significantly positively related to scores on measures of nonverbal cognitive skill, language, and everyday adaptive communication, and significantly negatively correlated with the severity of affective autism symptoms.
Conclusion
Results suggest that the ERRNI reveals discourse impairments that may not be identified by measures that focus on individual words and sentences. Overall, the ERRNI provides a useful measure of communicative skill beyond the sentence level in school-aged children with ASD.

from #Audiology via ola Kala on Inoreader http://article/60/8/2228/2648607/Beyond-Sentences-Using-the-Expression-Reception
via IFTTT

Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children

Purpose
The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children.
Method
P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and for consonant, frequency, intensity, vowel, and vowel duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years.
Results
The P1 and N2 latencies decreased and the N2, N4, and MMN amplitudes increased with development of the children. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN. At both ages, MMN was significantly elicited only by the consonant change, and at the age of 4 years, also by the vowel duration change during noise.
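For readers unfamiliar with the derivation: the MMN is obtained by subtracting the averaged response to the standard stimulus from the averaged response to a deviant. A minimal sketch of that subtraction using MNE-Python follows; the epoch labels, channel, and analysis window are illustrative assumptions, not the authors' pipeline.
```python
# Sketch: deriving an MMN difference wave with MNE-Python.
# `epochs` is assumed to be an mne.Epochs object with event labels
# "standard" and "deviant" (names illustrative, not from the study).
import mne

def compute_mmn(epochs: mne.Epochs, ch: str = "Fz"):
    evoked_std = epochs["standard"].average()
    evoked_dev = epochs["deviant"].average()
    # MMN = deviant - standard; weights [1, -1] implement the subtraction.
    mmn = mne.combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
    # Mean amplitude in a typical MMN window (window is an assumption).
    amp = mmn.copy().pick(ch).crop(tmin=0.15, tmax=0.30).data.mean()
    return mmn, amp * 1e6  # amplitude in microvolts
```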
Conclusions
Developmental changes indexing maturation of central auditory processing were found from every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages. The older children were as vulnerable to the impact of noise as the younger children.
Supplemental materials
http://ift.tt/2uSYZQZ

from #Audiology via ola Kala on Inoreader http://article/60/8/2297/2647677/Noise-Equally-Degrades-Central-Auditory-Processing
via IFTTT

Executive Functions Impact the Relation Between Respiratory Sinus Arrhythmia and Frequency of Stuttering in Young Children Who Do and Do Not Stutter

Purpose
This study sought to determine whether respiratory sinus arrhythmia (RSA) and executive functions are associated with stuttered speech disfluencies of young children who do (CWS) and do not stutter (CWNS).
Method
Thirty-six young CWS and 36 CWNS were exposed to neutral, negative, and positive emotion-inducing video clips, followed by their participation in speaking tasks. During the neutral video, we measured baseline RSA, a physiological index of emotion regulation, and during video viewing and speaking, we measured RSA change from baseline, a physiological index of regulatory responses during challenge. Participants' caregivers completed the Children's Behavior Questionnaire from which a composite score of the inhibitory control and attentional focusing subscales served to index executive functioning.
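As background for the RSA measures: RSA is commonly quantified as the natural log of heart-period variability in a respiration-linked frequency band, and "RSA change" is then a simple subtraction from baseline. A rough sketch under those assumptions (band limits, sampling rate, and names are illustrative, not the study's exact procedure):
```python
# Sketch: RSA as ln(power of the interbeat-interval series in a
# respiration-linked band), plus change-from-baseline scoring.
# All parameters are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def rsa(ibi_ms: np.ndarray, fs: float = 4.0, band=(0.24, 1.04)) -> float:
    """ibi_ms: evenly resampled interbeat intervals (ms) at fs Hz.
    The band shown is one often used for young children (assumption)."""
    freqs, psd = welch(ibi_ms - ibi_ms.mean(), fs=fs,
                       nperseg=min(256, len(ibi_ms)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.log(np.trapz(psd[mask], freqs[mask])))

# RSA change during challenge relative to the neutral baseline:
# rsa_change = rsa(ibi_speaking) - rsa(ibi_baseline)
# Negative values index an RSA decrease during video viewing or speaking.
```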
Results
For both CWS and CWNS, greater decrease of RSA during both video viewing and speaking was associated with more stuttering. During speaking, CWS with lower executive functioning exhibited a negative association between RSA change and stuttering; conversely, CWNS with higher executive functioning exhibited a negative association between RSA change and stuttering.
Conclusion
Findings suggest that decreased RSA during video viewing and speaking is associated with increased stuttering, and that young CWS differ from CWNS in how their executive functions moderate the relation between RSA change and stuttered disfluencies.

from #Audiology via ola Kala on Inoreader http://article/60/8/2133/2647676/Executive-Functions-Impact-the-Relation-Between
via IFTTT

An Exploration of the Associations Among Hearing Loss, Physical Health, and Visual Memory in Adults From West Central Alabama

Purpose
The purpose of this preliminary study was to explore the associations among hearing loss, physical health, and visual memory in adults living in rural areas, urban clusters, and an urban city in west Central Alabama.
Method
Two hundred ninety-seven adults (182 women, 115 men) from rural areas, urban clusters, and an urban city of west Central Alabama completed a hearing assessment, a physical health questionnaire, a hearing handicap measure, and a visual memory test.
Results
A greater number of adults with hearing loss lived in rural areas and urban clusters than in an urban area. In addition, poorer physical health was significantly associated with hearing loss. A greater number of individuals with poor physical health who lived in rural towns and urban clusters had hearing loss compared with the adults with other physical health issues who lived in an urban city. Poorer hearing sensitivity was associated with poorer outcomes on the Emotional and Social subscales of the Hearing Handicap Inventory for Adults. Last, visual memory, a working-memory task, was not associated with hearing loss but was associated with educational level.
Conclusions
The outcomes suggest that hearing loss is associated with poor physical and emotional health but not with visual-memory skills. A greater number of adults living in rural areas experienced hearing loss compared with adults living in an urban city; further research will be necessary to confirm this relationship and to explore the reasons behind it. Further exploration of the relationship between cognition and hearing loss in adults living in rural and urban areas will also be needed.

from #Audiology via ola Kala on Inoreader http://article/60/8/2346/2648885/An-Exploration-of-the-Associations-Among-Hearing
via IFTTT

Effects of Lexical and Somatosensory Feedback on Long-Term Improvements in Intelligibility of Dysarthric Speech

Purpose
Intelligibility improvements immediately following perceptual training with dysarthric speech using lexical feedback are comparable to those observed when training uses somatosensory feedback (Borrie & Schäfer, 2015). In this study, we investigated whether these lexical and somatosensory guided improvements in listener intelligibility of dysarthric speech remain comparable and stable over the course of 1 month.
Method
Following an intelligibility pretest, 60 participants were trained with dysarthric speech stimuli under one of three conditions: lexical feedback, somatosensory feedback, or no training (control). Participants then completed a series of intelligibility posttests, which took place immediately (immediate posttest), 1 week (1-week posttest) following training, and 1 month (1-month posttest) following training.
Results
As per our previous study, intelligibility improvements at immediate posttest were equivalent between lexical and somatosensory feedback conditions. Condition differences, however, emerged over time. Improvements guided by lexical feedback deteriorated over the month whereas those guided by somatosensory feedback remained robust.
Conclusions
Somatosensory feedback, internally generated by vocal imitation, may be required to effect long-term perceptual gain in processing dysarthric speech. Findings are discussed in relation to underlying learning mechanisms and offer insight into how externally and internally generated feedback may differentially affect perceptual learning of disordered speech.

from #Audiology via ola Kala on Inoreader http://article/60/8/2151/2643504/Effects-of-Lexical-and-Somatosensory-Feedback-on
via IFTTT

Judgments of Emotion in Clear and Conversational Speech by Young Adults With Normal Hearing and Older Adults With Hearing Impairment

Purpose
In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style.
Method
The first experiment included 18 YNH listeners, and the second included 10 additional YNH listeners along with 20 OHI listeners. Participants heard sentences spoken conversationally and clearly. Participants selected the emotion they heard in the talker's voice using a 6-alternative, forced-choice paradigm.
Results
Clear speech was judged as sounding angry and disgusted more often and happy, fearful, sad, and neutral less often than conversational speech. Talkers whose clear speech was judged to be particularly clear were also judged as sounding angry more often and fearful less often than other talkers. OHI listeners reported hearing anger less often than YNH listeners; however, they still judged clear speech as angry more often than conversational speech.
Conclusions
Speech spoken clearly may sound angry more often than speech spoken conversationally. Although perceived emotion varied between YNH and OHI listeners, judgments of anger were higher for clear speech than conversational speech for both listener groups.
Supplemental Materials
http://ift.tt/2sQO99N

from #Audiology via ola Kala on Inoreader http://article/60/8/2271/2643501/Judgments-of-Emotion-in-Clear-and-Conversational
via IFTTT

Glottal Aerodynamic Measures in Women With Phonotraumatic and Nonphonotraumatic Vocal Hyperfunction

Purpose
The purpose of this study was to determine the validity of preliminary reports showing that glottal aerodynamic measures can identify pathophysiological phonatory mechanisms for phonotraumatic and nonphonotraumatic vocal hyperfunction, which are each distinctly different from normal vocal function.
Method
Glottal aerodynamic measures (estimates of subglottal air pressure, peak-to-peak airflow, maximum flow declination rate, and open quotient) were obtained noninvasively using a pneumotachograph mask with an intraoral pressure catheter in 16 women with organic vocal fold lesions, 16 women with muscle tension dysphonia, and 2 associated matched control groups with normal voices. Subjects produced /pæ/ syllable strings from which glottal airflow was estimated using inverse filtering during /æ/ vowels, and subglottal pressure was estimated during /p/ closures. All measures were normalized for sound pressure level (SPL) and statistically tested for differences between patient and control groups.
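For context, subglottal pressure in this paradigm is typically estimated from the peak intraoral pressure reached during each /p/ closure of the /pæ/ string. A toy sketch of that averaging step (peak-detection settings are illustrative, not the authors' procedure):
```python
# Sketch: estimating subglottal pressure as the mean of peak intraoral
# pressures during /p/ closures in a /pæ pæ .../ syllable string.
# Peak-detection settings are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def estimate_psub(pressure: np.ndarray, fs: float,
                  min_gap_s: float = 0.2) -> float:
    """pressure: intraoral pressure signal (e.g., cm H2O) sampled at fs Hz."""
    peaks, _ = find_peaks(pressure,
                          distance=int(min_gap_s * fs),
                          prominence=np.std(pressure))
    return float(pressure[peaks].mean())
```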
Results
All SPL-normalized measures were significantly lower in the phonotraumatic group as compared with measures in its control group. For the nonphonotraumatic group, only SPL-normalized subglottal pressure and open quotient were significantly lower than measures in its control group.
Conclusions
Results of this study confirm previous hypotheses and preliminary results indicating that SPL-normalized estimates of glottal aerodynamic measures can be used to describe the different pathophysiological phonatory mechanisms associated with phonotraumatic and nonphonotraumatic vocal hyperfunction.

from #Audiology via ola Kala on Inoreader http://article/60/8/2159/2648608/Glottal-Aerodynamic-Measures-in-Women-With
via IFTTT

Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

Purpose
We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes.
Method
Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes.
Results
Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors.
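The hierarchical regression logic is: enter the conventional demographic and hearing factors first, then test how much additional variance (delta R²) the early skill and growth measures explain. A generic sketch with statsmodels; the formula terms and column names are hypothetical placeholders, not the study's variables.
```python
# Sketch: hierarchical regression computing the added variance (delta R^2)
# contributed by early postimplant measures beyond demographic/hearing
# factors. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def delta_r2(df: pd.DataFrame, outcome: str = "language_outcome") -> float:
    base = smf.ols(f"{outcome} ~ age_at_implant + residual_hearing", df).fit()
    full = smf.ols(
        f"{outcome} ~ age_at_implant + residual_hearing"
        " + skill_6mo + growth_6_to_18mo", df).fit()
    return full.rsquared - base.rsquared  # variance uniquely added in step 2
```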
Conclusion
Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants.
Supplemental materials
http://ift.tt/2tHGBXk

from #Audiology via ola Kala on Inoreader http://article/60/8/2321/2645734/Early-Postimplant-Speech-Perception-and-Language
via IFTTT

Applying an Integrative Framework of Executive Function to Preschoolers With Specific Language Impairment

Purpose
The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested within a developmental integrative framework of executive function.
Method
This study included 26 4- and 5-year-olds diagnosed with SLI and 26 typically developing age- and sex-matched peers. Participants were tested on verbal and nonverbal measures of sustained selective attention, working memory, inhibition, and shifting.
Results
The SLI group performed worse compared with typically developing children on both verbal and nonverbal measures of sustained selective attention and working memory, the verbal inhibition task, and the nonverbal shifting task. Comparisons of standardized group differences between executive function measures revealed a linear increase with the following order: working memory, inhibition, shifting, and sustained selective attention.
Conclusion
The pattern of results suggests that preschoolers with SLI have deficits in executive functioning compared with typical peers, and deficits are not limited to verbal tasks. A significant linear relationship between group differences across executive function components supports the possibility of a hierarchical relationship between executive function skills.

from #Audiology via ola Kala on Inoreader http://article/60/8/2170/2645739/Applying-an-Integrative-Framework-of-Executive
via IFTTT

Electrophysiological Evidence for the Sources of the Masking Level Difference

Purpose
The purpose of this review article is to synthesize evidence from auditory evoked potential studies in order to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD).
Method
A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD.
Results
Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD.
Conclusions
The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem code temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.

from #Audiology via ola Kala on Inoreader http://article/60/8/2364/2646849/Electrophysiological-Evidence-for-the-Sources-of
via IFTTT

Identifying the Dimensionality of Oral Language Skills of Children With Typical Development in Preschool Through Fifth Grade

Purpose
Language is a multidimensional construct from before the start of formal schooling to near the end of elementary school. The primary goals of this study were to identify the dimensionality of language and to determine whether this dimensionality was consistent in children with typical language development from preschool through 5th grade.
Method
In a large sample of 1,895 children, confirmatory factor analysis was conducted with 19–20 measures of language intended to represent 6 factors, including domains of vocabulary and syntax/grammar across modalities of expressive and receptive language, listening comprehension, and vocabulary depth.
Results
A 2-factor model with separate, highly correlated vocabulary and syntax factors provided the best fit to the data, and this model of language dimensionality was consistent from preschool through 5th grade.
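In confirmatory factor analysis terms, the winning model has two correlated latent factors. A toy specification in Python with the semopy package (package choice, lavaan-style syntax, and indicator names are assumptions, not the study's battery):
```python
# Sketch: a two-factor CFA with correlated vocabulary and syntax factors,
# written in semopy's lavaan-style syntax. Indicator names are hypothetical.
import pandas as pd
from semopy import Model, calc_stats

TWO_FACTOR = """
vocabulary =~ vocab1 + vocab2 + vocab3
syntax     =~ syn1 + syn2 + syn3
vocabulary ~~ syntax
"""

def fit_two_factor(df: pd.DataFrame):
    model = Model(TWO_FACTOR)
    model.fit(df)             # df: one column per observed measure
    return calc_stats(model)  # fit indices (CFI, RMSEA, ...) for comparison
```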
Conclusion
This study found that there are fewer dimensions than are often suggested or represented by the myriad subtests in commonly used standardized tests of language. The identified 2-dimensional (vocabulary and syntax) model of language has significant implications for the conceptualization and measurement of the language skills of children in the age range from preschool to 5th grade, including the study of typical and atypical language development, the study of the developmental and educational influences of language, and classification and intervention in clinical practice.
Supplemental Materials
http://ift.tt/2uEshUx

from #Audiology via ola Kala on Inoreader http://article/60/8/2185/2644885/Identifying-the-Dimensionality-of-Oral-Language
via IFTTT

Influences of Phonological Context on Tense Marking in Spanish–English Dual Language Learners

Purpose
The emergence of tense-morpheme marking during language acquisition is highly variable, which confounds the use of tense marking as a diagnostic indicator of language impairment in linguistically diverse populations. In this study, we seek to better understand tense-marking patterns in young bilingual children by comparing phonological influences on marking of 2 word-final tense morphemes.
Method
In spontaneous connected speech samples from 10 Spanish–English dual language learners aged 56–66 months (M = 61.7, SD = 3.4), we examined marking rates of past tense -ed and third person singular -s morphemes in different environments, using multiple measures of phonological context.
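Here a marking rate is simply the proportion of obligatory contexts in which the morpheme is overtly produced, tallied separately by phonological environment. A tiny sketch of that tabulation (the data layout is hypothetical):
```python
# Sketch: marking rate = proportion of obligatory contexts marked,
# grouped by morpheme and phonological environment. Layout hypothetical.
import pandas as pd

def marking_rates(df: pd.DataFrame) -> pd.Series:
    """df rows = obligatory contexts, with columns:
    'morpheme' ('-ed' or '-s'), 'context' (phonological environment),
    'marked' (1 if overtly produced, else 0)."""
    return df.groupby(["morpheme", "context"])["marked"].mean()
```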
Results
Both morphemes were found to exhibit notably contrastive marking patterns in some contexts. Each was most sensitive to a different combination of phonological influences in the verb stem and the following word.
Conclusions
These findings extend existing evidence from monolingual speakers for the influence of word-final phonological context on morpheme production to a bilingual population. Further, novel findings not yet attested in previous research support an expanded consideration of phonological context in clinical decision making and future research related to word-final morphology.

from #Audiology via ola Kala on Inoreader http://article/60/8/2199/2646850/Influences-of-Phonological-Context-on-Tense
via IFTTT

Vocabulary Facilitates Speech Perception in Children With Hearing Aids

Purpose
We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs.
Method
Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5–12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli.
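Scoring performance as "the SNR at which the child repeats 50% of stimuli" amounts to locating the midpoint of a psychometric function. A self-contained sketch that recovers SNR-50 from trial data by logistic fit (illustrative only; the study's adaptive procedure may differ):
```python
# Sketch: estimating SNR-50 (the SNR yielding 50% correct) by fitting a
# logistic psychometric function to trial-level data. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

def estimate_snr50(snrs, correct):
    """snrs: per-trial SNR (dB); correct: per-trial accuracy (0/1)."""
    (midpoint, _), _ = curve_fit(psychometric, snrs, correct, p0=[0.0, 1.0])
    return midpoint

# Synthetic demo: true SNR-50 of -4 dB.
rng = np.random.default_rng(0)
snrs = rng.uniform(-12.0, 6.0, 200)
correct = (rng.random(200) < psychometric(snrs, -4.0, 0.8)).astype(float)
print(f"Estimated SNR-50: {estimate_snr50(snrs, correct):.1f} dB")
```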
Results
Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs.
Conclusions
Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH.

from #Audiology via ola Kala on Inoreader http://article/60/8/2281/2646497/Vocabulary-Facilitates-Speech-Perception-in
via IFTTT

Normative Study of the Functional Assessment of Verbal Reasoning and Executive Strategies (FAVRES) Test in the French-Canadian Population

Purpose
The Functional Assessment of Verbal Reasoning and Executive Strategies (FAVRES; MacDonald, 2005) test was designed for use by speech-language pathologists to assess verbal reasoning, complex comprehension, discourse, and executive skills during performance on a set of challenging and ecologically valid functional tasks. A recent French version of this test was translated from English; however, it had not undergone standardization. Developing normative data that are linguistically and culturally sensitive to the target population is important. The present study aimed to establish normative data for the French version of the FAVRES, a test commonly used with native French-speaking patients with traumatic brain injury in Québec, Canada.
Method
The normative sample consisted of 181 healthy French-speaking adults from various regions across the province of Québec. Age and years of education were factored into the normative model.
Results
Results indicate that age was significantly associated with performance on time, accuracy, reasoning subskills, and rationale criteria, whereas the level of education was significantly associated with accuracy and rationale.
Conclusion
Overall, mean scores on each criterion were relatively lower than in the original English version, which reinforces the importance of using the present normative data when interpreting performance of French speakers who have sustained a traumatic brain injury.

from #Audiology via ola Kala on Inoreader http://article/60/8/2217/2648887/Normative-Study-of-the-Functional-Assessment-of
via IFTTT

Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss

Purpose
This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues.
Method
Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions.
Results
A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect.
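The reported analysis is a linear mixed model with listener as the random-effects grouping factor. A minimal statsmodels sketch of such a model; the formula terms and column names are hypothetical, not the study's exact specification.
```python
# Sketch: linear mixed model predicting sentence-recognition scores from
# visual cues, noise type, WM measures, and pure-tone average, with a
# random intercept per listener. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_lmm(df: pd.DataFrame):
    model = smf.mixedlm(
        "mlst_score ~ visual_cues + noise_type + reading_span + warrm + pta",
        data=df,
        groups=df["listener_id"],  # random intercept per participant
    )
    return model.fit()

# result = fit_lmm(df); print(result.summary())
```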
Conclusion
The contribution of WM in explaining unaided speech recognition in noise was negligible and was not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, larger effect sizes are needed.

from #Audiology via ola Kala on Inoreader http://article/60/8/2310/2646629/Working-Memory-and-Speech-Recognition-in-Noise
via IFTTT

Shortened Nonword Repetition Task (NWR-S): A Simple, Quick, and Less Expensive Outcome to Identify Children With Combined Specific Language and Reading Impairment

Purpose
The purpose of this research note was to validate a simplified version of the Dutch nonword repetition task (NWR; Rispens & Baker, 2012). The NWR was shortened and scoring was transformed to correct/incorrect nonwords, resulting in the shortened NWR (NWR-S).
Method
NWR-S and NWR performance were compared in the previously published data set of Rispens and Baker (2012; N = 88), who compared NWR performance in 5 participant groups: specific language impairment (SLI), reading impairment (RI), both SLI and RI, one control group matched on chronological age, and one control group matched on language age.
Results
Analyses of variance showed that children with SLI + RI performed significantly worse than the other participant groups on the NWR-S, just as on the NWR. Logistic regression analyses showed that both tasks can predict an SLI + RI outcome. The NWR-S has a sensitivity of 82.6% and a specificity of 95.4% in identifying children with SLI + RI; the original NWR has a sensitivity of 87.0% and a specificity of 87.7%.
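Sensitivity and specificity follow directly from classification counts: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). A small check of the arithmetic (the raw counts below are illustrative values consistent with the reported rates, not figures from the note):
```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# Illustrative counts only (chosen to match the reported percentages):
print(f"sensitivity: {sensitivity(19, 4):.1%}")  # 82.6%
print(f"specificity: {specificity(62, 3):.1%}")  # 95.4%
```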
Conclusions
Like the original NWR, the NWR-S, which comprises a subset of 22 nonwords scored with a simplified scoring system, can identify children with combined SLI and RI while saving a significant amount of assessment time.
Supplemental Materials
http://ift.tt/2vdqx0S

from #Audiology via ola Kala on Inoreader http://article/60/8/2241/2644493/Shortened-Nonword-Repetition-Task-NWRS-A-Simple
via IFTTT

Auditory Training for Adults Who Have Hearing Loss: A Comparison of Spaced Versus Massed Practice Schedules

Purpose
The spacing effect in human memory research refers to situations in which people learn items better when they study items in spaced intervals rather than massed intervals. This investigation was conducted to compare the efficacy of meaning-oriented auditory training when administered with a spaced versus massed practice schedule.
Method
Forty-seven adult hearing aid users received 16 hr of auditory training. Participants in a spaced group (mean age = 64.6 years, SD = 14.7) trained twice per week, and participants in a massed group (mean age = 69.6 years, SD = 17.5) trained for 5 consecutive days each week. Participants completed speech perception tests before training, immediately following training, and then 3 months later. In line with transfer appropriate processing theory, tests assessed both trained tasks and an untrained task.
Results
Auditory training improved the speech recognition performance of participants in both groups. Benefits were maintained for 3 months. No effect of practice schedule was found on overall benefits achieved, on retention of benefits, or on generalizability of benefits to untrained tasks.
Conclusion
The lack of spacing effect in otherwise effective auditory training suggests that perceptual learning may be subject to different influences than are other types of learning, such as vocabulary learning. Hence, clinicians might have latitude in recommending training schedules to accommodate patients' schedules.

from #Audiology via ola Kala on Inoreader http://article/60/8/2337/2648749/Auditory-Training-for-Adults-Who-Have-Hearing-Loss
via IFTTT

Visuospatial and Verbal Short-Term Memory Correlates of Vocabulary Ability in Preschool Children

Background
Recent studies indicate that school-age children's patterns of performance on measures of verbal and visuospatial short-term memory (STM) and working memory (WM) differ across types of neurodevelopmental disorders. Because these disorders are often characterized by early language delay, administering STM and WM tests to toddlers could improve prediction of neurodevelopmental outcomes. Toddler-appropriate verbal, but not visuospatial, STM and WM tasks are available. A toddler-appropriate visuospatial STM test is introduced.
Method
Tests of verbal STM, visuospatial STM, expressive vocabulary, and receptive vocabulary were administered to 92 English-speaking children aged 2–5 years.
Results
Mean test scores did not differ for boys and girls. Visuospatial and verbal STM scores were not significantly correlated when age was partialed out. Age, visuospatial STM scores, and verbal STM scores accounted for unique variance in expressive (51%, 3%, and 4%, respectively) and receptive vocabulary scores (53%, 5%, and 2%, respectively) in multiple regression analyses.
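"Unique variance" for each predictor is the squared semipartial correlation: the drop in R² when that predictor alone is removed from the full regression model. A generic sketch (column names are hypothetical placeholders):
```python
# Sketch: unique variance per predictor = full-model R^2 minus the R^2 of
# the model with that predictor dropped. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

PREDICTORS = ["age", "visuospatial_stm", "verbal_stm"]

def unique_variance(df: pd.DataFrame, outcome: str = "expressive_vocab"):
    full = smf.ols(f"{outcome} ~ {' + '.join(PREDICTORS)}", df).fit()
    uniques = {}
    for p in PREDICTORS:
        rest = " + ".join(q for q in PREDICTORS if q != p)
        reduced = smf.ols(f"{outcome} ~ {rest}", df).fit()
        uniques[p] = full.rsquared - reduced.rsquared
    return uniques  # e.g., {'age': ~0.51, 'visuospatial_stm': ~0.03, ...}
```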
Conclusion
Replication studies, a fuller test battery comprising visuospatial and verbal STM and WM tests, and a general intelligence test are required before exploring the usefulness of these STM tests for predicting longitudinal outcomes. The lack of an association between the STM tests suggests that the instruments have face validity and test independent STM skills.

from #Audiology via ola Kala on Inoreader http://article/60/8/2249/2648886/Visuospatial-and-Verbal-ShortTerm-Memory
via IFTTT

Speech Understanding in Noise by Patients With Cochlear Implants Using a Monaural Adaptive Beamformer

Purpose
The aim of this experiment was to compare, for patients with cochlear implants (CIs), the improvement for speech understanding in noise provided by a monaural adaptive beamformer and for two interventions that produced bilateral input (i.e., bilateral CIs and hearing preservation [HP] surgery).
Method
Speech understanding scores for sentences were obtained for 10 listeners fit with a single CI. The listeners were tested with and without the beamformer activated in a “cocktail party” environment with spatially separated target and maskers. Data for 10 listeners with bilateral CIs and 8 listeners with HP CIs were taken from Loiselle, Dorman, Yost, Cook, and Gifford (2016), who used the same test protocol.
Results
The use of the beamformer resulted in a 31 percentage point improvement in performance; in bilateral CIs, an 18 percentage point improvement; and in HP CIs, a 20 percentage point improvement.
Conclusion
A monaural adaptive beamformer can produce an improvement in speech understanding in a complex noise environment that is equal to, or greater than, the improvement produced by bilateral CIs and HP surgery.

from #Audiology via ola Kala on Inoreader http://article/60/8/2360/2647807/Speech-Understanding-in-Noise-by-Patients-With
via IFTTT

Beyond Sentences: Using the Expression, Reception, and Recall of Narratives Instrument to Assess Communication in School-Aged Children With Autism Spectrum Disorder

Purpose
Impairments in the social use of language are universal in autism spectrum disorder (ASD), but few standardized measures evaluate communication skills above the level of individual words or sentences. This study evaluated the Expression, Reception, and Recall of Narrative Instrument (ERRNI; Bishop, 2004) to determine its contribution to assessing language and communicative impairment beyond the sentence level in children with ASD.
Method
A battery of assessments, including measures of cognition, language, pragmatics, severity of autism symptoms, and adaptive functioning, was administered to 74 8- to 9-year-old intellectually able children with ASD.
Results
Average performance on the ERRNI was significantly poorer than on the Clinical Evaluation of Language Fundamentals–Fourth Edition (CELF-4). In addition, ERRNI scores reflecting the number and quality of relevant story components included in the participants' narratives were significantly positively related to scores on measures of nonverbal cognitive skill, language, and everyday adaptive communication, and significantly negatively correlated with the severity of affective autism symptoms.
Conclusion
Results suggest that the ERRNI reveals discourse impairments that may not be identified by measures that focus on individual words and sentences. Overall, the ERRNI provides a useful measure of communicative skill beyond the sentence level in school-aged children with ASD.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2228/2648607/Beyond-Sentences-Using-the-Expression-Reception
via IFTTT

Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children

Purpose
The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children.
Method
P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and consonants, frequency, intensity, vowel, and vowel duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years.
Results
The P1 and N2 latencies decreased and the N2, N4, and MMN amplitudes increased with development of the children. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN. At both ages, MMN was significantly elicited only by the consonant change, and at the age of 4 years, also by the vowel duration change during noise.
Conclusions
Developmental changes indexing maturation of central auditory processing were found from every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages. The older children were as vulnerable to the impact of noise as the younger children.
Supplemental materials
http://ift.tt/2uSYZQZ

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2297/2647677/Noise-Equally-Degrades-Central-Auditory-Processing
via IFTTT

Executive Functions Impact the Relation Between Respiratory Sinus Arrhythmia and Frequency of Stuttering in Young Children Who Do and Do Not Stutter

Purpose
This study sought to determine whether respiratory sinus arrhythmia (RSA) and executive functions are associated with stuttered speech disfluencies of young children who do (CWS) and do not stutter (CWNS).
Method
Thirty-six young CWS and 36 CWNS were exposed to neutral, negative, and positive emotion-inducing video clips, followed by their participation in speaking tasks. During the neutral video, we measured baseline RSA, a physiological index of emotion regulation, and during video viewing and speaking, we measured RSA change from baseline, a physiological index of regulatory responses during challenge. Participants' caregivers completed the Children's Behavior Questionnaire from which a composite score of the inhibitory control and attentional focusing subscales served to index executive functioning.
Results
For both CWS and CWNS, greater decrease of RSA during both video viewing and speaking was associated with more stuttering. During speaking, CWS with lower executive functioning exhibited a negative association between RSA change and stuttering; conversely, CWNS with higher executive functioning exhibited a negative association between RSA change and stuttering.
Conclusion
Findings suggest that decreased RSA during video viewing and speaking is associated with increased stuttering and young CWS differ from CWNS in terms of how their executive functions moderate the relation between RSA change and stuttered disfluencies.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2133/2647676/Executive-Functions-Impact-the-Relation-Between
via IFTTT

An Exploration of the Associations Among Hearing Loss, Physical Health, and Visual Memory in Adults From West Central Alabama

Purpose
The purpose of this preliminary study was to explore the associations among hearing loss, physical health, and visual memory in adults living in rural areas, urban clusters, and an urban city in west Central Alabama.
Method
Two hundred ninety-seven adults (182 women, 115 men) from rural areas, urban clusters, and an urban city of west Central Alabama completed a hearing assessment, a physical health questionnaire, a hearing handicap measure, and a visual memory test.
Results
A greater number of adults with hearing loss lived in rural areas and urban clusters than in an urban area. In addition, poorer physical health was significantly associated with hearing loss. A greater number of individuals with poor physical health who lived in rural towns and urban clusters had hearing loss compared with the adults with other physical health issues who lived in an urban city. Poorer hearing sensitivity resulted in poorer outcomes on the Emotional and Social subscales of the Hearing Handicap Inventory for Adults. And last, visual memory, a working-memory task, was not associated with hearing loss but was associated with educational level.
Conclusions
The outcomes suggest that hearing loss is associated with poor physical and emotional health but not with visual-memory skills. A greater number of adults living in rural areas experienced hearing loss compared with adults living in an urban city, and consequently, further research will be necessary to confirm this relationship and to explore the reasons behind it. Also, further exploration of the relationship between cognition and hearing loss in adults living in rural and urban areas will be needed.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2346/2648885/An-Exploration-of-the-Associations-Among-Hearing
via IFTTT

Effects of Lexical and Somatosensory Feedback on Long-Term Improvements in Intelligibility of Dysarthric Speech

Purpose
Intelligibility improvements immediately following perceptual training with dysarthric speech using lexical feedback are comparable to those observed when training uses somatosensory feedback (Borrie & Schäfer, 2015). In this study, we investigated if these lexical and somatosensory guided improvements in listener intelligibility of dysarthric speech remain comparable and stable over the course of 1 month.
Method
Following an intelligibility pretest, 60 participants were trained with dysarthric speech stimuli under one of three conditions: lexical feedback, somatosensory feedback, or no training (control). Participants then completed a series of intelligibility posttests, which took place immediately (immediate posttest), 1 week (1-week posttest) following training, and 1 month (1-month posttest) following training.
Results
As per our previous study, intelligibility improvements at immediate posttest were equivalent between lexical and somatosensory feedback conditions. Condition differences, however, emerged over time. Improvements guided by lexical feedback deteriorated over the month whereas those guided by somatosensory feedback remained robust.
Conclusions
Somatosensory feedback, internally generated by vocal imitation, may be required to affect long-term perceptual gain in processing dysarthric speech. Findings are discussed in relation to underlying learning mechanisms and offer insight into how externally and internally generated feedback may differentially affect perceptual learning of disordered speech.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2151/2643504/Effects-of-Lexical-and-Somatosensory-Feedback-on
via IFTTT

Judgments of Emotion in Clear and Conversational Speech by Young Adults With Normal Hearing and Older Adults With Hearing Impairment

Purpose
In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style.
Method
The first experiment included 18 YNH listeners, and the second included 10 additional YNH listeners along with 20 OHI listeners. Participants heard sentences spoken conversationally and clearly. Participants selected the emotion they heard in the talker's voice using a 6-alternative, forced-choice paradigm.
Results
Clear speech was judged as sounding angry and disgusted more often and happy, fearful, sad, and neutral less often than conversational speech. Talkers whose clear speech was judged to be particularly clear were also judged as sounding angry more often and fearful less often than other talkers. OHI listeners reported hearing anger less often than YNH listeners; however, they still judged clear speech as angry more often than conversational speech.
Conclusions
Speech spoken clearly may sound angry more often than speech spoken conversationally. Although perceived emotion varied between YNH and OHI listeners, judgments of anger were higher for clear speech than conversational speech for both listener groups.
Supplemental Materials
http://ift.tt/2sQO99N

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2271/2643501/Judgments-of-Emotion-in-Clear-and-Conversational
via IFTTT

Glottal Aerodynamic Measures in Women With Phonotraumatic and Nonphonotraumatic Vocal Hyperfunction

Purpose
The purpose of this study was to determine the validity of preliminary reports showing that glottal aerodynamic measures can identify pathophysiological phonatory mechanisms for phonotraumatic and nonphonotraumatic vocal hyperfunction, which are each distinctly different from normal vocal function.
Method
Glottal aerodynamic measures (estimates of subglottal air pressure, peak-to-peak airflow, maximum flow declination rate, and open quotient) were obtained noninvasively using a pneumotachograph mask with an intraoral pressure catheter in 16 women with organic vocal fold lesions, 16 women with muscle tension dysphonia, and 2 associated matched control groups with normal voices. Subjects produced /pae/ syllable strings from which glottal airflow was estimated using inverse filtering during /ae/ vowels, and subglottal pressure was estimated during /p/ closures. All measures were normalized for sound pressure level (SPL) and statistically tested for differences between patient and control groups.
Results
All SPL-normalized measures were significantly lower in the phonotraumatic group as compared with measures in its control group. For the nonphonotraumatic group, only SPL-normalized subglottal pressure and open quotient were significantly lower than measures in its control group.
Conclusions
Results of this study confirm previous hypotheses and preliminary results indicating that SPL-normalized estimates of glottal aerodynamic measures can be used to describe the different pathophysiological phonatory mechanisms associated with phonotraumatic and nonphonotraumatic vocal hyperfunction.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2159/2648608/Glottal-Aerodynamic-Measures-in-Women-With
via IFTTT

Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

Purpose
We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes.
Method
Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes.
Results
Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors.
Conclusion
Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants.
Supplemental materials
http://ift.tt/2tHGBXk

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2321/2645734/Early-Postimplant-Speech-Perception-and-Language
via IFTTT

Applying an Integrative Framework of Executive Function to Preschoolers With Specific Language Impairment

Purpose
The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested within a developmental integrative framework of executive function.
Method
This study included 26 4- and 5-year-olds diagnosed with SLI and 26 typically developing age- and sex-matched peers. Participants were tested on verbal and nonverbal measures of sustained selective attention, working memory, inhibition, and shifting.
Results
The SLI group performed worse compared with typically developing children on both verbal and nonverbal measures of sustained selective attention and working memory, the verbal inhibition task, and the nonverbal shifting task. Comparisons of standardized group differences between executive function measures revealed a linear increase with the following order: working memory, inhibition, shifting, and sustained selective attention.
Conclusion
The pattern of results suggests that preschoolers with SLI have deficits in executive functioning compared with typical peers, and deficits are not limited to verbal tasks. A significant linear relationship between group differences across executive function components supports the possibility of a hierarchical relationship between executive function skills.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2170/2645739/Applying-an-Integrative-Framework-of-Executive
via IFTTT

Electrophysiological Evidence for the Sources of the Masking Level Difference

Purpose
The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD).
Method
A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD.
Results
Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD.
Conclusions
The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem codes temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2364/2646849/Electrophysiological-Evidence-for-the-Sources-of
via IFTTT

Identifying the Dimensionality of Oral Language Skills of Children With Typical Development in Preschool Through Fifth Grade

Purpose
Language is a multidimensional construct from prior to the beginning of formal schooling to near the end of elementary school. The primary goals of this study were to identify the dimensionality of language and to determine whether this dimensionality was consistent in children with typical language development from preschool through 5th grade.
Method
In a large sample of 1,895 children, confirmatory factor analysis was conducted with 19–20 measures of language intended to represent 6 factors, including domains of vocabulary and syntax/grammar across modalities of expressive and receptive language, listening comprehension, and vocabulary depth.
Results
A 2-factor model with separate, highly correlated vocabulary and syntax factors provided the best fit to the data, and this model of language dimensionality was consistent from preschool through 5th grade.
Conclusion
This study found that there are fewer dimensions than are often suggested or represented by the myriad subtests in commonly used standardized tests of language. The identified 2-dimensional (vocabulary and syntax) model of language has significant implications for the conceptualization and measurement of the language skills of children in the age range from preschool to 5th grade, including the study of typical and atypical language development, the study of the developmental and educational influences of language, and classification and intervention in clinical practice.
Supplemental Materials
http://ift.tt/2uEshUx

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2185/2644885/Identifying-the-Dimensionality-of-Oral-Language
via IFTTT

Influences of Phonological Context on Tense Marking in Spanish–English Dual Language Learners

Purpose
The emergence of tense-morpheme marking during language acquisition is highly variable, which confounds the use of tense marking as a diagnostic indicator of language impairment in linguistically diverse populations. In this study, we seek to better understand tense-marking patterns in young bilingual children by comparing phonological influences on marking of 2 word-final tense morphemes.
Method
In spontaneous connected speech samples from 10 Spanish–English dual language learners aged 56–66 months (M = 61.7, SD = 3.4), we examined marking rates of past tense -ed and third person singular -s morphemes in different environments, using multiple measures of phonological context.
Results
Both morphemes were found to exhibit notably contrastive marking patterns in some contexts. Each was most sensitive to a different combination of phonological influences in the verb stem and the following word.
Conclusions
These findings extend existing evidence from monolingual speakers for the influence of word-final phonological context on morpheme production to a bilingual population. Further, novel findings not yet attested in previous research support an expanded consideration of phonological context in clinical decision making and future research related to word-final morphology.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2199/2646850/Influences-of-Phonological-Context-on-Tense
via IFTTT

Vocabulary Facilitates Speech Perception in Children With Hearing Aids

Purpose
We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs.
Method
Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5–12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli.
Results
Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs.
Conclusions
Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2281/2646497/Vocabulary-Facilitates-Speech-Perception-in
via IFTTT

Normative Study of the Functional Assessment of Verbal Reasoning and Executive Strategies (FAVRES) Test in the French-Canadian Population

Purpose
The Functional Assessment of Verbal Reasoning and Executive Strategies (FAVRES; MacDonald, 2005) test was designed for use by speech-language pathologists to assess verbal reasoning, complex comprehension, discourse, and executive skills during performance on a set of challenging and ecologically valid functional tasks. A recent French version of this test was translated from English; however, it had not undergone standardization. The development of normative data that are linguistically and culturally sensitive to the target population is of importance. The present study aimed to establish normative data for the French version of the FAVRES, a commonly used test with native French–speaking patients with traumatic brain injury in Québec, Canada.
Method
The normative sample consisted of 181 healthy French-speaking adults from various regions across the province of Québec. Age and years of education were factored into the normative model.
Results
Results indicate that age was significantly associated with performance on time, accuracy, reasoning subskills, and rationale criteria, whereas the level of education was significantly associated with accuracy and rationale.
Conclusion
Overall, mean scores on each criterion were relatively lower than in the original English version, which reinforces the importance of using the present normative data when interpreting performance of French speakers who have sustained a traumatic brain injury.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2217/2648887/Normative-Study-of-the-Functional-Assessment-of
via IFTTT

Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss

Purpose
This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues.
Method
Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. In a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and in 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions.
Results
A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect.
Conclusion
The contribution of WM to explaining unaided speech recognition in noise was negligible and was not influenced by noise type or visual cues. We anticipate that the effects of WM will increase once audibility is partially restored by hearing aids; larger effect sizes would be needed for these findings to affect clinical practice.
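
For readers unfamiliar with the analysis, a linear mixed model of this kind treats participant as the random (grouping) factor and tests fixed effects such as visual cues, noise type, pure-tone average, and WM. A minimal sketch using statsmodels, with hypothetical column and file names:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed long-format data: one row per participant x condition.
    df = pd.read_csv("speech_recognition.csv")  # hypothetical file

    model = smf.mixedlm("score ~ visual_cues + noise_type + pta + wm_span",
                        data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())  # fixed-effect estimates and significance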

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2310/2646629/Working-Memory-and-Speech-Recognition-in-Noise
via IFTTT

Shortened Nonword Repetition Task (NWR-S): A Simple, Quick, and Less Expensive Outcome to Identify Children With Combined Specific Language and Reading Impairment

Purpose
The purpose of this research note was to validate a simplified version of the Dutch nonword repetition task (NWR; Rispens & Baker, 2012). The NWR was shortened and scoring was transformed to correct/incorrect nonwords, resulting in the shortened NWR (NWR-S).
Method
NWR-S and NWR performance were compared in the previously published data set of Rispens and Baker (2012; N = 88), who compared NWR performance in 5 participant groups: specific language impairment (SLI), reading impairment (RI), both SLI and RI, one control group matched on chronological age, and one control group matched on language age.
Results
Analyses of variance showed that children with SLI + RI performed significantly worse than the other participant groups on the NWR-S, just as on the NWR. Logistic regression analyses showed that both tasks can predict an SLI + RI outcome. The NWR-S has a sensitivity of 82.6% and a specificity of 95.4% in identifying children with SLI + RI; the original NWR has a sensitivity of 87.0% and a specificity of 87.7%.
Conclusions
Like the original NWR, the NWR-S, which comprises a subset of 22 nonwords scored with a simplified correct/incorrect system, can identify children with combined SLI and RI while saving a substantial amount of assessment time.
Supplemental Materials
http://ift.tt/2vdqx0S
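
The reported accuracy values follow from the standard definitions. The counts below are inferred from the percentages, assuming 23 children with SLI + RI and 65 comparison children (N = 88); they are shown only to make the arithmetic concrete:

    # sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    true_pos, false_neg = 19, 4   # children with SLI + RI (23 total, assumed)
    true_neg, false_pos = 62, 3   # remaining children (65 total, assumed)

    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
    # Prints: sensitivity = 82.6%, specificity = 95.4%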

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2241/2644493/Shortened-Nonword-Repetition-Task-NWRS-A-Simple
via IFTTT

Auditory Training for Adults Who Have Hearing Loss: A Comparison of Spaced Versus Massed Practice Schedules

Purpose
The spacing effect in human memory research refers to situations in which people learn items better when they study items in spaced intervals rather than massed intervals. This investigation was conducted to compare the efficacy of meaning-oriented auditory training when administered with a spaced versus massed practice schedule.
Method
Forty-seven adult hearing aid users received 16 hr of auditory training. Participants in a spaced group (mean age = 64.6 years, SD = 14.7) trained twice per week, and participants in a massed group (mean age = 69.6 years, SD = 17.5) trained for 5 consecutive days each week. Participants completed speech perception tests before training, immediately following training, and then 3 months later. In line with transfer appropriate processing theory, tests assessed both trained tasks and an untrained task.
Results
Auditory training improved the speech recognition performance of participants in both groups. Benefits were maintained for 3 months. No effect of practice schedule was found on overall benefits achieved, on retention of benefits, or on generalization of benefits to untrained tasks.
Conclusion
The lack of spacing effect in otherwise effective auditory training suggests that perceptual learning may be subject to different influences than are other types of learning, such as vocabulary learning. Hence, clinicians might have latitude in recommending training schedules to accommodate patients' schedules.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2337/2648749/Auditory-Training-for-Adults-Who-Have-Hearing-Loss
via IFTTT

Visuospatial and Verbal Short-Term Memory Correlates of Vocabulary Ability in Preschool Children

Background
Recent studies indicate that school-age children's patterns of performance on measures of verbal and visuospatial short-term memory (STM) and working memory (WM) differ across types of neurodevelopmental disorders. Because these disorders are often characterized by early language delay, administering STM and WM tests to toddlers could improve prediction of neurodevelopmental outcomes. Toddler-appropriate verbal, but not visuospatial, STM and WM tasks are available. A toddler-appropriate visuospatial STM test is introduced.
Method
Tests of verbal STM, visuospatial STM, expressive vocabulary, and receptive vocabulary were administered to 92 English-speaking children aged 2–5 years.
Results
Mean test scores did not differ for boys and girls. Visuospatial and verbal STM scores were not significantly correlated when age was partialed out. Age, visuospatial STM scores, and verbal STM scores accounted for unique variance in expressive (51%, 3%, and 4%, respectively) and receptive vocabulary scores (53%, 5%, and 2%, respectively) in multiple regression analyses.
Conclusion
Replication studies, a fuller test battery comprising visuospatial and verbal STM and WM tests, and a general intelligence test are required before exploring the usefulness of these STM tests for predicting longitudinal outcomes. The lack of an association between the STM tests suggests that the instruments have face validity and test independent STM skills.
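
The "unique variance" figures in the Results come from comparing regression models with and without each predictor (the change in R-squared). A minimal sketch of that logic on simulated data (all values invented):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 92
    age = rng.normal(size=n)
    vis_stm = rng.normal(size=n)
    verb_stm = rng.normal(size=n)
    vocab = 0.7 * age + 0.2 * vis_stm + 0.25 * verb_stm + rng.normal(size=n)

    def r_squared(y, *predictors):
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1.0 - (y - X @ beta).var() / y.var()

    full = r_squared(vocab, age, vis_stm, verb_stm)
    unique_vis = full - r_squared(vocab, age, verb_stm)  # drop visuospatial STM
    print(f"unique variance for visuospatial STM: {unique_vis:.1%}")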

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2249/2648886/Visuospatial-and-Verbal-ShortTerm-Memory
via IFTTT

Speech Understanding in Noise by Patients With Cochlear Implants Using a Monaural Adaptive Beamformer

Purpose
The aim of this experiment was to compare, for patients with cochlear implants (CIs), the improvement in speech understanding in noise provided by a monaural adaptive beamformer with the improvements provided by two interventions that produce bilateral input (i.e., bilateral CIs and hearing preservation [HP] surgery).
Method
Speech understanding scores for sentences were obtained for 10 listeners fit with a single CI. The listeners were tested with and without the beamformer activated in a “cocktail party” environment with spatially separated target and maskers. Data for 10 listeners with bilateral CIs and 8 listeners with HP CIs were taken from Loiselle, Dorman, Yost, Cook, and Gifford (2016), who used the same test protocol.
Results
Use of the beamformer resulted in a 31-percentage-point improvement in performance, compared with an 18-percentage-point improvement for bilateral CIs and a 20-percentage-point improvement for HP CIs.
Conclusion
A monaural adaptive beamformer can produce an improvement in speech understanding in a complex noise environment that is equal to, or greater than, the improvement produced by bilateral CIs and HP surgery.

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2360/2647807/Speech-Understanding-in-Noise-by-Patients-With
via IFTTT

Sensitivity to Audiovisual Temporal Asynchrony in Children With a History of Specific Language Impairment and Their Peers With Typical Development: A Replication and Follow-Up Study

Purpose
Earlier, my colleagues and I showed that children with a history of specific language impairment (H-SLI) are significantly less able to detect audiovisual asynchrony compared with children with typical development (TD; Kaganovich & Schumaker, 2014). Here, I first replicate this finding in a new group of children with H-SLI and TD and then examine a relationship among audiovisual function, attention skills, and language in a combined pool of children.
Method
The stimuli were a pure tone and an explosion-shaped figure. Stimulus onset asynchrony (SOA) varied from 0 to 500 ms. Children pressed 1 button for perceived synchrony and another for asynchrony. I measured the number of synchronous perceptions at each SOA and calculated children's temporal binding windows. I then conducted multiple regressions to determine whether audiovisual processing and attention can predict language skills.
Results
As in the earlier study, children with H-SLI perceived asynchrony significantly less frequently than children with TD at SOAs of 400–500 ms. Their temporal binding windows were also larger. Temporal precision and attention together accounted for 23%–37% of the variance in children's language ability.
Conclusions
Audiovisual temporal processing is impaired in children with H-SLI. The degree of this impairment is a predictor of language skills. Once understood, the mechanisms underlying this deficit may become a new focus for language remediation.
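
A temporal binding window is typically derived by fitting a curve to the proportion of "synchronous" responses across SOAs and reading off the span over which that curve exceeds a criterion. A minimal sketch with invented data, assuming a Gaussian fit and a 0.5 criterion (the study's exact procedure may differ):

    import numpy as np
    from scipy.optimize import curve_fit

    soas = np.array([0, 100, 200, 300, 400, 500], dtype=float)  # ms
    p_sync = np.array([0.95, 0.90, 0.80, 0.60, 0.45, 0.30])     # hypothetical

    def gauss(soa, amp, sigma):
        return amp * np.exp(-(soa ** 2) / (2 * sigma ** 2))

    (amp, sigma), _ = curve_fit(gauss, soas, p_sync, p0=[1.0, 250.0])
    criterion = 0.5
    window = sigma * np.sqrt(2 * np.log(amp / criterion))  # where fit = criterion
    print(f"binding window extends to ~{window:.0f} ms")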

from #Audiology via xlomafota13 on Inoreader http://article/60/8/2259/2644822/Sensitivity-to-Audiovisual-Temporal-Asynchrony-in
via IFTTT

An Exploration of the Associations Among Hearing Loss, Physical Health, and Visual Memory in Adults From West Central Alabama

Purpose
The purpose of this preliminary study was to explore the associations among hearing loss, physical health, and visual memory in adults living in rural areas, urban clusters, and an urban city in west Central Alabama.
Method
Two hundred ninety-seven adults (182 women, 115 men) from rural areas, urban clusters, and an urban city of west Central Alabama completed a hearing assessment, a physical health questionnaire, a hearing handicap measure, and a visual memory test.
Results
More adults with hearing loss lived in rural areas and urban clusters than in the urban city. In addition, poorer physical health was significantly associated with hearing loss: among individuals with poor physical health, hearing loss was more common in those living in rural towns and urban clusters than in those living in the urban city. Poorer hearing sensitivity was associated with poorer outcomes on the Emotional and Social subscales of the Hearing Handicap Inventory for Adults. Finally, visual memory, a working-memory task, was not associated with hearing loss but was associated with educational level.
Conclusions
The outcomes suggest that hearing loss is associated with poor physical and emotional health but not with visual-memory skills. More adults living in rural areas experienced hearing loss than adults living in the urban city; further research will be necessary to confirm this relationship and to explore the reasons behind it. Further exploration of the relationship between cognition and hearing loss in adults living in rural and urban areas is also needed.

from #Audiology via ola Kala on Inoreader http://article/60/8/2346/2648885/An-Exploration-of-the-Associations-Among-Hearing
via IFTTT

Effects of Lexical and Somatosensory Feedback on Long-Term Improvements in Intelligibility of Dysarthric Speech

Purpose
Intelligibility improvements immediately following perceptual training with dysarthric speech using lexical feedback are comparable to those observed when training uses somatosensory feedback (Borrie & Schäfer, 2015). In this study, we investigated whether improvements guided by lexical feedback and by somatosensory feedback remain comparable and stable over the course of 1 month.
Method
Following an intelligibility pretest, 60 participants were trained with dysarthric speech stimuli under one of three conditions: lexical feedback, somatosensory feedback, or no training (control). Participants then completed a series of intelligibility posttests administered immediately after training (immediate posttest), 1 week after training (1-week posttest), and 1 month after training (1-month posttest).
Results
As in our previous study, intelligibility improvements at the immediate posttest were equivalent between the lexical and somatosensory feedback conditions. Condition differences, however, emerged over time: improvements guided by lexical feedback deteriorated over the month, whereas those guided by somatosensory feedback remained robust.
Conclusions
Somatosensory feedback, internally generated by vocal imitation, may be required to achieve long-term perceptual gains in processing dysarthric speech. Findings are discussed in relation to underlying learning mechanisms and offer insight into how externally and internally generated feedback may differentially affect perceptual learning of disordered speech.

from #Audiology via ola Kala on Inoreader http://article/60/8/2151/2643504/Effects-of-Lexical-and-Somatosensory-Feedback-on
via IFTTT

Judgments of Emotion in Clear and Conversational Speech by Young Adults With Normal Hearing and Older Adults With Hearing Impairment

Purpose
In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style.
Method
The first experiment included 18 YNH listeners, and the second included 10 additional YNH listeners along with 20 OHI listeners. Participants heard sentences spoken conversationally and clearly. Participants selected the emotion they heard in the talker's voice using a 6-alternative, forced-choice paradigm.
Results
Clear speech was judged as sounding angry and disgusted more often and happy, fearful, sad, and neutral less often than conversational speech. Talkers whose clear speech was judged to be particularly clear were also judged as sounding angry more often and fearful less often than other talkers. OHI listeners reported hearing anger less often than YNH listeners; however, they still judged clear speech as angry more often than conversational speech.
Conclusions
Speech spoken clearly may sound angry more often than speech spoken conversationally. Although perceived emotion varied between YNH and OHI listeners, judgments of anger were higher for clear speech than conversational speech for both listener groups.
Supplemental Materials
http://ift.tt/2sQO99N

from #Audiology via ola Kala on Inoreader http://article/60/8/2271/2643501/Judgments-of-Emotion-in-Clear-and-Conversational
via IFTTT

Glottal Aerodynamic Measures in Women With Phonotraumatic and Nonphonotraumatic Vocal Hyperfunction

Purpose
The purpose of this study was to determine the validity of preliminary reports showing that glottal aerodynamic measures can identify pathophysiological phonatory mechanisms for phonotraumatic and nonphonotraumatic vocal hyperfunction, which are each distinctly different from normal vocal function.
Method
Glottal aerodynamic measures (estimates of subglottal air pressure, peak-to-peak airflow, maximum flow declination rate, and open quotient) were obtained noninvasively using a pneumotachograph mask with an intraoral pressure catheter in 16 women with organic vocal fold lesions, 16 women with muscle tension dysphonia, and 2 associated matched control groups with normal voices. Subjects produced /pæ/ syllable strings from which glottal airflow was estimated using inverse filtering during /æ/ vowels, and subglottal pressure was estimated during /p/ closures. All measures were normalized for sound pressure level (SPL) and statistically tested for differences between patient and control groups.
Results
All SPL-normalized measures were significantly lower in the phonotraumatic group as compared with measures in its control group. For the nonphonotraumatic group, only SPL-normalized subglottal pressure and open quotient were significantly lower than measures in its control group.
Conclusions
Results of this study confirm previous hypotheses and preliminary results indicating that SPL-normalized estimates of glottal aerodynamic measures can be used to describe the different pathophysiological phonatory mechanisms associated with phonotraumatic and nonphonotraumatic vocal hyperfunction.
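
The abstract notes that all aerodynamic measures were normalized for SPL. One common approach (a sketch only; the authors' procedure may differ) is to regress each measure on SPL and work with the residuals; all values below are hypothetical:

    import numpy as np

    spl = np.array([72.0, 75.0, 78.0, 81.0, 84.0])        # dB SPL, hypothetical
    mfdr = np.array([210.0, 260.0, 330.0, 420.0, 520.0])  # maximum flow
                                                          # declination rate

    # Aerodynamic measures grow roughly log-linearly with SPL, so fit log units.
    slope, intercept = np.polyfit(spl, np.log10(mfdr), 1)
    normalized = np.log10(mfdr) - (slope * spl + intercept)  # residuals
    print(normalized.round(3))  # SPL-normalized values, centered near 0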

from #Audiology via ola Kala on Inoreader http://article/60/8/2159/2648608/Glottal-Aerodynamic-Measures-in-Women-With
via IFTTT

Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

Purpose
We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes.
Method
Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes.
Results
Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors.
Conclusion
Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants.
Supplemental materials
http://ift.tt/2tHGBXk

from #Audiology via ola Kala on Inoreader http://article/60/8/2321/2645734/Early-Postimplant-Speech-Perception-and-Language
via IFTTT

Applying an Integrative Framework of Executive Function to Preschoolers With Specific Language Impairment

Purpose
The first goal of this research was to compare verbal and nonverbal executive function abilities between preschoolers with and without specific language impairment (SLI). The second goal was to assess the group differences on 4 executive function components in order to determine if the components may be hierarchically related as suggested within a developmental integrative framework of executive function.
Method
This study included 26 4- and 5-year-olds diagnosed with SLI and 26 typically developing age- and sex-matched peers. Participants were tested on verbal and nonverbal measures of sustained selective attention, working memory, inhibition, and shifting.
Results
The SLI group performed worse than typically developing children on both verbal and nonverbal measures of sustained selective attention and working memory, on the verbal inhibition task, and on the nonverbal shifting task. Comparisons of standardized group differences between executive function measures revealed a linear increase in the following order: working memory, inhibition, shifting, and sustained selective attention.
Conclusion
The pattern of results suggests that preschoolers with SLI have deficits in executive functioning compared with typical peers, and deficits are not limited to verbal tasks. A significant linear relationship between group differences across executive function components supports the possibility of a hierarchical relationship between executive function skills.
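
The "standardized group differences" that order the four components are effect sizes such as Cohen's d. A minimal worked example with invented scores (not the study's data):

    import numpy as np

    def cohens_d(a, b):
        # Mean difference divided by the pooled standard deviation.
        a, b = np.asarray(a, float), np.asarray(b, float)
        pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                             (len(b) - 1) * b.var(ddof=1)) /
                            (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / pooled_sd

    td_scores = [12, 14, 15, 13, 16, 14]   # hypothetical TD group
    sli_scores = [10, 11, 13, 9, 12, 11]   # hypothetical SLI group
    print(round(cohens_d(td_scores, sli_scores), 2))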

from #Audiology via ola Kala on Inoreader http://article/60/8/2170/2645739/Applying-an-Integrative-Framework-of-Executive
via IFTTT

Electrophysiological Evidence for the Sources of the Masking Level Difference

Purpose
This review article summarizes evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD).
Method
A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD.
Results
Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem, and brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. Producing the full MLD, including the threshold differences observed across the various binaural signal and noise conditions, requires input to the generators of the auditory late latency potentials. Studies of central auditory lesions are beginning to identify the cortical contributions to the MLD.
Conclusions
The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem code temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.
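
By definition, the MLD is the improvement in masked threshold when the interaural phase of the signal or the noise differs between conditions; a worked example with hypothetical thresholds:

    # MLD = threshold(S0N0) - threshold(SpiN0), both in dB.
    homophasic = 68.0   # dB, signal and noise in phase at both ears (S0N0)
    antiphasic = 56.0   # dB, signal phase-inverted at one ear (SpiN0)
    print(f"MLD = {homophasic - antiphasic:.0f} dB")  # typically ~10-15 dB
                                                      # for low-frequency tones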

from #Audiology via ola Kala on Inoreader http://article/60/8/2364/2646849/Electrophysiological-Evidence-for-the-Sources-of
via IFTTT

Identifying the Dimensionality of Oral Language Skills of Children With Typical Development in Preschool Through Fifth Grade

Purpose
Language is a multidimensional construct from prior to the beginning of formal schooling to near the end of elementary school. The primary goals of this study were to identify the dimensionality of language and to determine whether this dimensionality was consistent in children with typical language development from preschool through 5th grade.
Method
In a large sample of 1,895 children, confirmatory factor analysis was conducted with 19–20 measures of language intended to represent 6 factors, including domains of vocabulary and syntax/grammar across modalities of expressive and receptive language, listening comprehension, and vocabulary depth.
Results
A 2-factor model with separate, highly correlated vocabulary and syntax factors provided the best fit to the data, and this model of language dimensionality was consistent from preschool through 5th grade.
Conclusion
This study found that there are fewer dimensions than are often suggested or represented by the myriad subtests in commonly used standardized tests of language. The identified 2-dimensional (vocabulary and syntax) model of language has significant implications for the conceptualization and measurement of the language skills of children in the age range from preschool to 5th grade, including the study of typical and atypical language development, the study of the developmental and educational influences of language, and classification and intervention in clinical practice.
Supplemental Materials
http://ift.tt/2uEshUx

from #Audiology via ola Kala on Inoreader http://article/60/8/2185/2644885/Identifying-the-Dimensionality-of-Oral-Language
via IFTTT

Influences of Phonological Context on Tense Marking in Spanish–English Dual Language Learners

Purpose
The emergence of tense-morpheme marking during language acquisition is highly variable, which confounds the use of tense marking as a diagnostic indicator of language impairment in linguistically diverse populations. In this study, we seek to better understand tense-marking patterns in young bilingual children by comparing phonological influences on marking of 2 word-final tense morphemes.
Method
In spontaneous connected speech samples from 10 Spanish–English dual language learners aged 56–66 months (M = 61.7, SD = 3.4), we examined marking rates of past tense -ed and third person singular -s morphemes in different environments, using multiple measures of phonological context.
Results
Both morphemes were found to exhibit notably contrastive marking patterns in some contexts. Each was most sensitive to a different combination of phonological influences in the verb stem and the following word.
Conclusions
These findings extend existing evidence from monolingual speakers for the influence of word-final phonological context on morpheme production to a bilingual population. Further, novel findings not yet attested in previous research support an expanded consideration of phonological context in clinical decision making and future research related to word-final morphology.
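
Marking rates of this kind are simple conditional proportions: for each morpheme, the share of obligatory contexts that are overtly marked, broken down by phonological environment. A minimal sketch with hypothetical coding columns (not the authors' coding scheme):

    import pandas as pd

    tokens = pd.DataFrame({
        "morpheme": ["-ed", "-ed", "-s", "-s", "-ed", "-s"],
        "stem_final": ["stop", "vowel", "stop", "vowel", "stop", "stop"],
        "marked": [0, 1, 1, 1, 0, 1],   # 1 = morpheme overtly produced
    })

    rates = tokens.groupby(["morpheme", "stem_final"])["marked"].mean()
    print(rates)   # marking rate per morpheme and stem-final context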

from #Audiology via ola Kala on Inoreader http://article/60/8/2199/2646850/Influences-of-Phonological-Context-on-Tense
via IFTTT
