Thursday, February 15, 2018

Performance on Auditory and Visual Tasks of Inhibition in English Monolingual and Spanish–English Bilingual Adults: Do Bilinguals Have a Cognitive Advantage?

Purpose
Bilingual individuals have been shown to be more proficient on visual tasks of inhibition compared with their monolingual counterparts. However, the bilingual advantage has not been evidenced in all studies, and very little is known regarding how bilingualism influences inhibitory control in the perception of auditory information. The purpose of the current study was to examine inhibition of irrelevant information using auditory and visual tasks in English monolingual and Spanish–English bilingual adults.
Method
Twenty English monolinguals and 19 early balanced Spanish–English bilinguals participated in this study. All participants were 18–30 years of age, had hearing thresholds < 25 dB HL from 250 to 8000 Hz bilaterally (American National Standards Institute, 2003), and were right-handed. Inhibition was measured using a forced-attention dichotic consonant–vowel listening task and the Simon task, a nonverbal visual test.
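As background for how the Simon task indexes inhibition: performance is typically summarized as the response-time cost on incongruent trials relative to congruent trials. The sketch below computes that congruency (Simon) effect from hypothetical trial-level data; the column names and values are illustrative and are not the study's analysis pipeline.

```python
# Minimal sketch of Simon-task scoring with hypothetical data (not the authors' pipeline).
import pandas as pd

trials = pd.DataFrame({
    "condition": ["congruent", "incongruent", "congruent", "incongruent"],
    "rt_ms":     [412, 478, 395, 463],     # response times in milliseconds
    "correct":   [True, True, True, False],
})

correct_only = trials[trials["correct"]]                      # score correct trials only
mean_rt = correct_only.groupby("condition")["rt_ms"].mean()
simon_effect = mean_rt["incongruent"] - mean_rt["congruent"]  # larger = weaker inhibition
print(f"Simon effect: {simon_effect:.1f} ms")
```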
Results
Both groups of participants demonstrated a significant right ear advantage on the dichotic listening task; however, no significant differences in performance were evidenced between the monolingual and bilingual groups in any of the dichotic listening conditions. Both groups performed better on the congruent trial than on the incongruent trial of the Simon task and had significantly faster response times on the congruent trial than on the incongruent trial. However, there were no significant differences in performance between the monolingual and bilingual groups on the visual test of inhibition.
Conclusions
No significant differences in performance on auditory and visual tests of inhibition of irrelevant information were evidenced between the monolingual and bilingual participants in this study. These findings suggest that bilinguals may not exhibit an advantage in the inhibition of irrelevant information compared with monolinguals.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2EqRoMH
via IFTTT

Manual Versus Automated Narrative Analysis of Agrammatic Production Patterns: The Northwestern Narrative Language Analysis and Computerized Language Analysis

Purpose
The purpose of this study was to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, with those of the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals with aphasia, (a) to ascertain whether the two systems yield similar results and (b) to evaluate CLAN's ability to automatically identify language variables important for detailing agrammatic production patterns.
Method
The same set of Cinderella narrative samples from 8 participants with a clinical diagnosis of agrammatic aphasia and 10 cognitively healthy control participants were transcribed and coded using NNLA and CLAN. Both coding systems were utilized to quantify and characterize speech production patterns across several microsyntactic levels: utterance, sentence, lexical, morphological, and verb argument structure levels. Agreement between the 2 coding systems was computed for variables coded by both.
Results
Comparison of the 2 systems revealed high agreement for most, but not all, lexical-level and morphological-level variables. However, NNLA elucidated utterance-level, sentence-level, and verb argument structure–level impairments, important for assessment and treatment of agrammatism, which are not automatically coded by CLAN.
Conclusions
CLAN automatically and reliably codes most lexical and morphological variables but does not automatically quantify variables important for detailing production deficits in agrammatic aphasia, although conventions for manually coding some of these variables in Codes for the Human Analysis of Transcripts are possible. Suggestions for combining automated programs and manual coding to capture these variables or revising CLAN to automate coding of these variables are discussed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2EqlBhs
via IFTTT

Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit

Purpose
Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, taken at face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate.
Method
We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis.
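To make the unit-of-analysis issue concrete, the sketch below scores error-type consistency as the proportion of repeated productions sharing the modal error code, at each of the three units. The codes and the metric are simplified assumptions for illustration, not the study's transcription and coding conventions.

```python
# Illustrative only: consistency tends to rise as the analysis unit shrinks.
from collections import Counter

def consistency(error_codes):
    """Proportion of repetitions sharing the most frequent (modal) error type."""
    modal_count = Counter(error_codes).most_common(1)[0][1]
    return modal_count / len(error_codes)

# Hypothetical error-type codes for 5 repetitions of one target word,
# scored at three units of analysis:
units = {
    "word":     ["substitution", "distortion", "substitution", "omission", "distortion"],
    "syllable": ["substitution", "substitution", "substitution", "omission", "substitution"],
    "segment":  ["substitution"] * 5,
}

for unit, codes in units.items():
    print(f"{unit:9s} consistency = {consistency(codes):.2f}")
```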
Results
Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency.
Conclusions
Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicates that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FBwIB8
via IFTTT

Mechanisms of Vowel Variation in African American English

Purpose
This research explored mechanisms of vowel variation in African American English by comparing 2 geographically distant groups of African American and White American English speakers for participation in the African American Shift and the Southern Vowel Shift.
Method
Thirty-two male (African American: n = 16, White American controls: n = 16) lifelong residents of cities in eastern and western North Carolina produced heed, hid, heyd, head, had, hod, hawed, whod, hood, hoed, hide, howed, hoyd, and heard 3 times each in random order. Acoustic analyses of formant frequencies and vowel durations were completed for the vowels /i, ɪ, e, ɛ, æ, ɑ, ɔ, u, ʊ, o, aɪ, aʊ, oɪ, ɝ/ produced in the listed words.
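As an illustration of the measurement involved, the sketch below extracts midpoint formant frequencies and vowel duration using Praat via the parselmouth Python library. The file name and analysis settings are hypothetical; the abstract does not specify the authors' measurement protocol.

```python
# Hypothetical formant-measurement sketch (pip install praat-parselmouth).
import parselmouth

snd = parselmouth.Sound("heed_token1.wav")   # hypothetical pre-segmented vowel token
formants = snd.to_formant_burg(time_step=0.01,
                               max_number_of_formants=5,
                               maximum_formant=5000)  # ~5000 Hz is typical for male talkers

midpoint = 0.5 * (snd.xmin + snd.xmax)        # measure at the temporal midpoint
f1 = formants.get_value_at_time(1, midpoint)  # first formant, Hz
f2 = formants.get_value_at_time(2, midpoint)  # second formant, Hz
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz, duration = {snd.xmax - snd.xmin:.3f} s")
```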
Results
African American English speakers show vowel variation. In the west, the African American English speakers are participating in the Southern Vowel Shift and hod fronting of the African American Shift. In the east, neither the African American English speakers nor their White peers are participating in the Southern Vowel Shift. The African American English speakers show limited participation in the African American Shift.
Conclusion
The results provide evidence of regional and socio-ethnic variation in African American English in North Carolina.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2E5Rj39
via IFTTT

Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

Purpose
The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial attributions, and perceived speech tempo. We also examined the voice dimensions affecting the propensity to engage in social interactions.
Method
Twenty-five younger (age 19–37 years) and 25 older (age 51–74 years) healthy adults participated in this cross-sectional study. Their task was to evaluate the voice of 80 talkers.
Results
Statistical analyses revealed limited effects of the age of the listener on voice evaluation. Specifically, older listeners provided relatively more favorable voice ratings than younger listeners, mainly in terms of roughness. In contrast, the age of the talker had a broader impact on voice evaluation, affecting auditory-perceptual evaluations, psychosocial attributions, and perceived speech tempo. Some of these talker differences were dependent upon the sex of the talker and his or her smoking status. Finally, the results also showed that voice-related psychosocial attribution was more strongly associated with the propensity of the listener to engage in social interactions with a person than were auditory-perceptual dimensions and perceived speech tempo, especially for the younger adults.
Conclusions
These results suggest that age has a broad influence on voice evaluation, with a stronger impact for talker age compared with listener age. While voice-related psychosocial attributions may be an important determinant of social interactions, perceived voice quality and speech tempo appear to be less influential.
Supplemental Materials
https://doi.org/10.23641/asha.5844102

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2DSbNZV
via IFTTT

Erratum



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2Ea2esZ
via IFTTT

Utterance Duration as It Relates to Communicative Variables in Infant Vocal Development

Purpose
We aimed to provide novel information on utterance duration as it relates to vocal type, facial affect, gaze direction, and age in the prelinguistic/early linguistic infant.
Method
Infant utterances were analyzed from longitudinal recordings of 15 infants at 8, 10, 12, 14, and 16 months of age. Utterance durations were measured and coded for vocal type (i.e., squeal, growl, raspberry, vowel, cry, laugh), facial affect (i.e., positive, negative, neutral), and gaze direction (i.e., to person, to mirror, or not directed).
Results
Of the 18,236 utterances analyzed, durations were typically shortest at 14 months of age and longest at 16 months of age. Statistically significant changes were observed in utterance durations across age for all variables of interest.
Conclusion
Despite variation in duration of infant utterances, developmental patterns were observed. For these infants, utterance durations appear to become more consolidated later in development, after the 1st year of life. Indeed, 12 months is often noted as the typical age of onset for 1st words and may be a point at which utterance durations begin to show patterns across communicative variables.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2E7Z05L
via IFTTT

Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

Purpose
The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech is mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the target. We also assessed whether the spectral resolution of the noise-vocoded stimuli affected the presence of LRM and SRM under these conditions.
Method
In Experiment 1, a mixed factorial design was used to simultaneously manipulate the masker language (within-subject, English vs. Dutch), the simulated masker location (within-subject, right, center, left), and the spectral resolution (between-subjects, 6 vs. 12 channels) of noise-vocoded target–masker combinations presented at +25 dB signal-to-noise ratio (SNR). In Experiment 2, the study was repeated using a spectral resolution of 12 channels at +15 dB SNR.
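For context, a noise vocoder divides the signal into frequency channels, extracts each channel's amplitude envelope, and uses the envelopes to modulate band-limited noise, so fewer channels means coarser spectral resolution. The sketch below is a generic channel vocoder under assumed filter settings; the abstract does not describe how the study's stimuli were actually generated.

```python
# Generic noise-vocoder sketch; channel edges and filter choices are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=6, lo=100.0, hi=8000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced channel edges
    carrier = np.random.randn(len(x))
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                     # analysis band of the input
        envelope = np.abs(hilbert(band))               # channel amplitude envelope
        out += envelope * sosfiltfilt(sos, carrier)    # envelope-modulated band noise
    return out / np.max(np.abs(out))

fs = 22050
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 220 * t)                   # stand-in for a speech waveform
vocoded_6ch = noise_vocode(signal, fs, n_channels=6)
vocoded_12ch = noise_vocode(signal, fs, n_channels=12)
```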
Results
In both experiments, listeners' intelligibility of noise-vocoded targets was better when the background masker was Dutch, demonstrating reliable LRM in all conditions. The pattern of results in Experiment 1 was not reliably different across the 6- and 12-channel noise-vocoded speech. Finally, a reliable spatial benefit (SRM) was detected only in the more challenging SNR condition (Experiment 2).
Conclusion
The current study is the first to report a clear LRM benefit in noise-vocoded speech-in-speech recognition. Our results indicate that this benefit is available even under spectrally degraded conditions and that it may augment the benefit due to spatial separation of target speech and competing backgrounds.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2EA2u24
via IFTTT

Lingual Pressure as a Clinical Indicator of Swallowing Function in Parkinson's Disease

Purpose
Swallowing impairment, or dysphagia, is a known contributor to reduced quality of life, pneumonia, and mortality in Parkinson's disease (PD). However, the contribution of tongue dysfunction, specifically inadequate pressure generation, to dysphagia in PD remains unclear. Our purpose was to determine whether lingual pressures in PD (a) are reduced, (b) reflect medication state, and (c) are consistent with self-reported diet and swallowing function.
Method
Twenty-eight persons with idiopathic PD (PwPD) and 28 age- and sex-matched controls completed lingual pressure tasks with the Iowa Oral Performance Instrument. PwPD were tested during practically defined ON and OFF dopaminergic medication states. Participants were also stratified into three sex- and age-matched cohorts (7 men, 5 women): (a) controls, (b) PwPD without self-reported dysphagia symptoms or diet restrictions, and (c) PwPD with self-reported dysphagia symptoms with or without diet restrictions.
Results
PwPD exhibited reduced tongue strength and used elevated proportions of tongue strength during swallowing compared with controls (p < .05) without an effect of medication state (p > .05). Reduced tongue strength distinguished PwPD with self-reported dysphagia symptoms from PwPD without reported symptoms or diet restrictions (p = .045) and controls (p = .002).
Conclusion
Tongue strength was significantly reduced in PwPD and did not differ by medication state. Tongue strength differentiated between PwPD with and without self-reported swallowing symptoms. Therefore, measures of tongue strength and swallowing pressures may serve as clinical indicators for further dysphagia evaluation and may promote early diagnosis and management of dysphagia in PD.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2DVaN7m
via IFTTT

Changing the Subject: The Place of Revisions in Grammatical Development

Purpose
This article focuses on toddlers' revisions of the sentence subject and tests the hypothesis that subject diversity (i.e., the number of different subjects produced) increases the probability of subject revision.
Method
One-hour language samples were collected from 61 children (32 girls) at 27 months. Spontaneously produced, active declarative sentences (ADSs) were analyzed for subject diversity and the presence of subject revision and repetition. The number of different words produced, mean length of utterance, tense/agreement productivity score, and the number of ADSs were also measured.
Results
Regression analyses were performed with revision and repetition as the dependent variables. Subject diversity significantly predicted the probability of revision, whereas the number of ADSs predicted the probability of repetition.
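The abstract does not name the regression family; one plausible form is a logistic regression predicting whether an utterance's subject is revised. The sketch below uses hypothetical data and variable names purely to show the shape of such an analysis.

```python
# Hypothetical logistic-regression sketch (not the study's actual model or data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "revised":           [0, 1, 0, 1, 1, 0, 0, 1, 0, 1],    # 1 = sentence subject revised
    "subject_diversity": [3, 9, 7, 4, 8, 2, 6, 10, 5, 11],  # different subjects produced
})

model = smf.logit("revised ~ subject_diversity", data=df).fit(disp=False)
print(model.params)  # a positive coefficient means diversity raises revision probability
```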
Conclusion
The results support the hypothesis that subject diversity increases the probability of subject revision. It is proposed that lexical diversity within specific syntactic positions is the primary mechanism whereby revision rates increase with grammatical development. The results underscore the need to differentiate repetition from revision in the classification of disfluencies.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2E1QzK0
via IFTTT

Voice, Articulation, and Prosody Contribute to Listener Perceptions of Speaker Gender: A Systematic Review and Meta-Analysis

Purpose
The aim of this study was to provide a systematic review of the aspects of verbal communication contributing to listener perceptions of speaker gender, with a view to providing clinicians with guidance for the selection of training goals when working with transsexual individuals.
Method
Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) guidelines were adopted in this systematic review. Studies evaluating the contribution of aspects of verbal communication to listener perceptions of speaker gender were rated against a new risk of bias assessment tool. Relevant data were extracted, and narrative synthesis was then conducted. Meta-analyses were conducted when appropriate data were available.
Results
Thirty-eight articles met the eligibility criteria. Meta-analysis showed that speaking fundamental frequency contributed 41.6% of the variance in gender perception. Auditory-perceptual and acoustic measures of pitch, resonance, loudness, articulation, and intonation were found to be associated with listeners' perceptions of speaker gender. Tempo and stress were not significantly associated. Mixed findings were reported on the contribution of a breathy voice quality to gender perception. However, this body of research carries a significant risk of bias.
Conclusions
Speech and language clinicians working with transsexual individuals may use the results of this review for goal setting. Further research is required to redress the significant risk of bias.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FEnvb7
via IFTTT

Cognitive Profiles of Finnish Preschool Children With Expressive and Receptive Language Impairment

Purpose
The aim of this study was to compare the verbal and nonverbal cognitive profiles of children with specific language impairment (SLI) who presented with problems predominantly in expressive (SLI-E) or receptive (SLI-R) language skills. These diagnostic subgroups have not previously been compared in psychological studies.
Method
Participants were preschool-age Finnish-speaking children with SLI diagnosed by a multidisciplinary team. Cognitive profile differences between the diagnostic subgroups and the relationship between verbal and nonverbal reasoning skills were evaluated.
Results
Performance was worse for the SLI-R subgroup than for the SLI-E subgroup not only in verbal reasoning and short-term memory but also in nonverbal reasoning, and several nonverbal subtests correlated significantly with the composite verbal index. However, weaknesses and strengths in the cognitive profiles of the subgroups were parallel.
Conclusions
Poor verbal comprehension and reasoning skills seem to be associated with lower nonverbal performance in children with SLI. The performance index (Performance Intelligence Quotient) may not always represent the intact nonverbal capacity assumed in SLI diagnostics, and a broader assessment is recommended when a child fails any of the compulsory Performance Intelligence Quotient subtests. Differences between the SLI subgroups appear quantitative rather than qualitative, in line with the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) classification (American Psychiatric Association, 2013).

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2Em2uVQ
via IFTTT

A Meta-Analysis: Acoustic Measurement of Roughness and Breathiness

Purpose
Over the last 5 decades, many acoustic measures have been created to quantify roughness and breathiness. The aim of this study was to present a meta-analysis of correlation coefficients (r) between auditory-perceptual judgments of roughness and breathiness and various acoustic measures in both sustained vowels and continuous speech.
Method
Scientific literature reporting perceptual–acoustic correlations for roughness and breathiness was sought in 28 databases. Weighted average correlation coefficients (rw) were calculated when multiple r-values were available for a specific acoustic marker. An rw ≥ .60 was the threshold for an acoustic measure to be considered acceptable.
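The abstract does not state the pooling formula; a common convention averages r-values on the Fisher z scale with weights of n − 3 and back-transforms the result. The sketch below illustrates that convention with made-up numbers.

```python
# Fisher-z pooling of correlations (one common convention; values are made up).
import numpy as np

r = np.array([0.72, 0.55, 0.64])   # r-values reported for one acoustic measure
n = np.array([40, 25, 60])         # sample sizes of the contributing studies

z = np.arctanh(r)                  # Fisher r-to-z transform
z_w = np.sum((n - 3) * z) / np.sum(n - 3)
r_w = np.tanh(z_w)                 # back-transform to the weighted average r
print(f"rw = {r_w:.2f} ({'acceptable' if r_w >= 0.60 else 'below the .60 threshold'})")
```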
Results
From 103 studies of roughness and 107 studies of breathiness that were investigated, only 33 studies and 34 studies, respectively, met the inclusion criteria of the meta-analysis on sustained vowels. Eighty-six acoustic measures were identified for roughness and 85 for breathiness on sustained vowels, of which 43 and 39 measures, respectively, yielded multiple r-values. Finally, only 14 measures for roughness and 12 measures for breathiness produced rw ≥ .60. On continuous speech, 4 measures for roughness and 21 measures for breathiness were identified, yielding 3 and 6 measures, respectively, with multiple r-values, of which only 1 and 2, respectively, had rw ≥ .60.
Conclusion
This meta-analysis showed that only a few acoustic parameters were determined as the best estimators for roughness and breathiness.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2E6Oy1C
via IFTTT

Remote Microphone System Use at Home: Impact on Caregiver Talk

Purpose
The purpose of this study was to investigate the effects of home use of a remote microphone system (RMS) on the spoken language production of caregivers of young children with hearing loss.
Method
Language Environment Analysis recorders were used with 10 families during 2 consecutive weekends (RMS weekend and No-RMS weekend). The amount of talk from a single caregiver that could be made accessible to children with hearing loss when using an RMS was estimated using Language Environment Analysis software. The total amount of caregiver talk (close and far talk) was also compared across both weekends. In addition, caregivers' perceptions of RMS use were gathered.
Results
With the use of an RMS, children could potentially have access to approximately 42% more words per day. In addition, although caregivers produced an equivalent number of words on both weekends, they tended to talk more from a distance when using the RMS than when not. Finally, caregivers reported positive perceived communication benefits of RMS use.
Conclusions
Findings from this investigation suggest that children with hearing loss have increased access to caregiver talk when using an RMS in the home environment. Clinical implications and future directions for research are discussed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2EOV363
via IFTTT

Well-Being and Resilience in Children With Speech and Language Disorders

Purpose
Children with speech and language disorders are at risk in relation to psychological and social well-being. The aim of this study was to understand the experiences of these children from their own perspectives focusing on risks to their well-being and protective indicators that may promote resilience.
Method
Eleven 9- to 12-year-old children (4 boys and 7 girls) were recruited using purposeful sampling. One participant presented with a speech sound disorder, 1 presented with both a speech and language disorder, and 9 with language disorders. All were receiving additional educational supports. Narrative inquiry, a qualitative design, was employed. Data were generated in home and school settings using multiple semi-structured interviews with each child over a 6-month period. A total of 59 interviews were conducted. The data were analyzed to identify themes in relation to potential risk factors to well-being and protective strategies.
Results
Potential risk factors in relation to well-being were communication impairment and disability, difficulties with relationships, and concern about academic achievement. Potential protective strategies were hope, agency, and positive relationships.
Conclusion
This study highlights the importance of listening to children's narratives so that those at risk in relation to well-being can be identified. Conceptualization of well-being and resilience within an ecological framework may enable identification of protective strategies at both individual and environmental levels that can be strengthened to mitigate negative experiences.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2DRHG8f
via IFTTT

The Effect of Remote Masking on the Reception of Speech by Young School-Age Children

Purpose
Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults.
Method
Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups.
Results
It was found that speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects.
Conclusions
In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2Ey83xO
via IFTTT

A Narrative Evaluation of Mandarin-Speaking Children With Language Impairment

Purpose
We aimed to study the narrative skills of Mandarin-speaking children with language impairment (LI) and to compare them with those reported for children with LI who speak Indo-European languages.
Method
Eighteen Mandarin-speaking children with LI (mean age 6;2 [years;months]) and 18 typically developing (TD) age-matched controls told 3 stories elicited using the Mandarin Expressive Narrative Test (de Villiers & Liu, 2014). We compared macrostructure, evaluating descriptions of characters, settings, initiating events, internal responses, plans, actions, and consequences. We also studied general microstructure, including productivity, lexical diversity, syntactic complexity, and grammaticality. In addition, we compared the use of 6 fine-grained microstructure elements that evaluate particular Mandarin linguistic features.
Results
Children with LI exhibited weaknesses in 5 macrostructure elements, lexical diversity, syntactic complexity, and 3 Mandarin-specific, fine-grained microstructure elements. Children with LI and TD controls demonstrated comparable performance on 2 macrostructure elements, productivity, grammaticality, and the remaining 3 fine-grained microstructure features.
Conclusions
Similarities and differences are noted in narrative profiles of children with LI who speak Mandarin versus those who speak Indo-European languages. The results are consistent with the view that profiles of linguistic deficits are shaped by the ambient language. Clinical implications are discussed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FigjkE
via IFTTT

Masthead



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2EvmFRY
via IFTTT

Performance on Auditory and Visual Tasks of Inhibition in English Monolingual and Spanish–English Bilingual Adults: Do Bilinguals Have a Cognitive Advantage?

Purpose
Bilingual individuals have been shown to be more proficient on visual tasks of inhibition compared with their monolingual counterparts. However, the bilingual advantage has not been evidenced in all studies, and very little is known regarding how bilingualism influences inhibitory control in the perception of auditory information. The purpose of the current study was to examine inhibition of irrelevant information using auditory and visual tasks in English monolingual and Spanish–English bilingual adults.
Method
Twenty English monolinguals and 19 early balanced Spanish–English bilinguals participated in this study. All participants were 18–30 years of age, had hearing thresholds < 25 dB HL from 250 to 8000 Hz, bilaterally (American National Standards Institute, 2003), and were right handed. Inhibition was measured using a forced-attention dichotic consonant–vowel listening task and the Simon task, a nonverbal visual test.
Results
Both groups of participants demonstrated a significant right ear advantage on the dichotic listening task; however, no significant differences in performance were evidenced between the monolingual and bilingual groups in any of the dichotic listening conditions. Both groups performed better on the congruent trial than on the incongruent trial of the Simon task and had significantly faster response times on the congruent trial than on the incongruent trial. However, there were no significant differences in performance between the monolingual and bilingual groups on the visual test of inhibition.
Conclusions
No significant differences in performance on auditory and visual tests of inhibition of irrelevant information were evidenced between the monolingual and bilingual participants in this study. These findings suggest that bilinguals may not exhibit an advantage in the inhibition of irrelevant information compared with monolinguals.

from #Audiology via ola Kala on Inoreader http://ift.tt/2EqRoMH
via IFTTT

Manual Versus Automated Narrative Analysis of Agrammatic Production Patterns: The Northwestern Narrative Language Analysis and Computerized Language Analysis

Purpose
The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals with aphasia (a) for reliability purposes to ascertain whether they yield similar results and (b) to evaluate CLAN for its ability to automatically identify language variables important for detailing agrammatic production patterns.
Method
The same set of Cinderella narrative samples from 8 participants with a clinical diagnosis of agrammatic aphasia and 10 cognitively healthy control participants were transcribed and coded using NNLA and CLAN. Both coding systems were utilized to quantify and characterize speech production patterns across several microsyntactic levels: utterance, sentence, lexical, morphological, and verb argument structure levels. Agreement between the 2 coding systems was computed for variables coded by both.
Results
Comparison of the 2 systems revealed high agreement for most, but not all, lexical-level and morphological-level variables. However, NNLA elucidated utterance-level, sentence-level, and verb argument structure–level impairments, important for assessment and treatment of agrammatism, which are not automatically coded by CLAN.
Conclusions
CLAN automatically and reliably codes most lexical and morphological variables but does not automatically quantify variables important for detailing production deficits in agrammatic aphasia, although conventions for manually coding some of these variables in Codes for the Human Analysis of Transcripts are possible. Suggestions for combining automated programs and manual coding to capture these variables or revising CLAN to automate coding of these variables are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2EqlBhs
via IFTTT

Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit

Purpose
Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate.
Method
We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis.
Results
Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency.
Conclusions
Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicate that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FBwIB8
via IFTTT

Mechanisms of Vowel Variation in African American English

Purpose
This research explored mechanisms of vowel variation in African American English by comparing 2 geographically distant groups of African American and White American English speakers for participation in the African American Shift and the Southern Vowel Shift.
Method
Thirty-two male (African American: n = 16, White American controls: n = 16) lifelong residents of cities in eastern and western North Carolina produced heed, hid, heyd, head, had, hod, hawed, whod, hood, hoed, hide, howed, hoyd, and heard 3 times each in random order. Formant frequency, duration, and acoustic analyses were completed for the vowels /i, ɪ, e, ɛ, æ, ɑ, ɔ, u, ʊ, o, aɪ, aʊ, oɪ, ɝ/ produced in the listed words.
Results
African American English speakers show vowel variation. In the west, the African American English speakers are participating in the Southern Vowel Shift and hod fronting of the African American Shift. In the east, neither the African American English speakers nor their White peers are participating in the Southern Vowel Shift. The African American English speakers show limited participation in the African American Shift.
Conclusion
The results provide evidence of regional and socio-ethnic variation in African American English in North Carolina.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E5Rj39
via IFTTT

Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

Purpose
The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial attributions, and perceived speech tempo. We also examined the voice dimensions affecting the propensity to engage in social interactions.
Method
Twenty-five younger (age 19–37 years) and 25 older (age 51–74 years) healthy adults participated in this cross-sectional study. Their task was to evaluate the voice of 80 talkers.
Results
Statistical analyses revealed limited effects of the age of the listener on voice evaluation. Specifically, older listeners provided relatively more favorable voice ratings than younger listeners, mainly in terms of roughness. In contrast, the age of the talker had a broader impact on voice evaluation, affecting auditory-perceptual evaluations, psychosocial attributions, and perceived speech tempo. Some of these talker differences were dependent upon the sex of the talker and his or her smoking status. Finally, the results also show that voice-related psychosocial attribution was more strongly associated with the propensity of the listener to engage in social interactions with a person than auditory-perceptual dimensions and perceived speech tempo, especially for the younger adults.
Conclusions
These results suggest that age has a broad influence on voice evaluation, with a stronger impact for talker age compared with listener age. While voice-related psychosocial attributions may be an important determinant of social interactions, perceived voice quality and speech tempo appear to be less influential.
Supplemental Materials
https://doi.org/10.23641/asha.5844102

from #Audiology via ola Kala on Inoreader http://ift.tt/2DSbNZV
via IFTTT

Erratum



from #Audiology via ola Kala on Inoreader http://ift.tt/2Ea2esZ
via IFTTT

Utterance Duration as It Relates to Communicative Variables in Infant Vocal Development

Purpose
We aimed to provide novel information on utterance duration as it relates to vocal type, facial affect, gaze direction, and age in the prelinguistic/early linguistic infant.
Method
Infant utterances were analyzed from longitudinal recordings of 15 infants at 8, 10, 12, 14, and 16 months of age. Utterance durations were measured and coded for vocal type (i.e., squeal, growl, raspberry, vowel, cry, laugh), facial affect (i.e., positive, negative, neutral), and gaze direction (i.e., to person, to mirror, or not directed).
Results
Of the 18,236 utterances analyzed, durations were typically shortest at 14 months of age and longest at 16 months of age. Statistically significant changes were observed in utterance durations across age for all variables of interest.
Conclusion
Despite variation in duration of infant utterances, developmental patterns were observed. For these infants, utterance durations appear to become more consolidated later in development, after the 1st year of life. Indeed, 12 months is often noted as the typical age of onset for 1st words and might possibly be a point in time when utterance durations begin to show patterns across communicative variables.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E7Z05L
via IFTTT

Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

Purpose
The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the target. We also assessed whether the spectral resolution of the noise-vocoded stimuli affected the presence of LRM and SRM under these conditions.
Method
In Experiment 1, a mixed factorial design was used to simultaneously manipulate the masker language (within-subject, English vs. Dutch), the simulated masker location (within-subject, right, center, left), and the spectral resolution (between-subjects, 6 vs. 12 channels) of noise-vocoded target–masker combinations presented at +25 dB signal-to-noise ratio (SNR). In Experiment 2, the study was repeated using a spectral resolution of 12 channels at +15 dB SNR.
Results
In both experiments, listeners' intelligibility of noise-vocoded targets was better when the background masker was Dutch, demonstrating reliable LRM in all conditions. The pattern of results in Experiment 1 was not reliably different across the 6- and 12-channel noise-vocoded speech. Finally, a reliable spatial benefit (SRM) was detected only in the more challenging SNR condition (Experiment 2).
Conclusion
The current study is the first to report a clear LRM benefit in noise-vocoded speech-in-speech recognition. Our results indicate that this benefit is available even under spectrally degraded conditions and that it may augment the benefit due to spatial separation of target speech and competing backgrounds.

from #Audiology via ola Kala on Inoreader http://ift.tt/2EA2u24
via IFTTT

Lingual Pressure as a Clinical Indicator of Swallowing Function in Parkinson's Disease

Purpose
Swallowing impairment, or dysphagia, is a known contributor to reduced quality of life, pneumonia, and mortality in Parkinson's disease (PD). However, the contribution of tongue dysfunction, specifically inadequate pressure generation, to dysphagia in PD remains unclear. Our purpose was to determine whether lingual pressures in PD are (a) reduced, (b) reflect medication state, or are (c) consistent with self-reported diet and swallowing function.
Method
Twenty-eight persons with idiopathic PD (PwPD) and 28 age- and sex-matched controls completed lingual pressure tasks with the Iowa Oral Performance Instrument. PwPD were tested during practically defined ON and OFF dopaminergic medication states. Participants were also stratified into three sex- and age-matched cohorts (7 men, 5 women): (a) controls, (b) PwPD without self-reported dysphagia symptoms or diet restrictions, and (c) PwPD with self-reported dysphagia symptoms with or without diet restrictions.
Results
PwPD exhibited reduced tongue strength and used elevated proportions of tongue strength during swallowing compared with controls (p < .05) without an effect of medication state (p > .05). Reduced tongue strength distinguished PwPD with self-reported dysphagia symptoms from PwPD without reported symptoms or diet restrictions (p = .045) and controls (p = .002).
Conclusion
Tongue strength was significantly reduced in PwPD and did not differ by medication state. Tongue strength differentiated between PwPD with and without self-reported swallowing symptoms. Therefore, measures of tongue strength and swallowing pressures may serve as clinical indicators for further dysphagia evaluation and may promote early diagnosis and management of dysphagia in PD.

from #Audiology via ola Kala on Inoreader http://ift.tt/2DVaN7m
via IFTTT

Changing the Subject: The Place of Revisions in Grammatical Development

Purpose
This article focuses on toddlers' revisions of the sentence subject and tests the hypothesis that subject diversity (i.e., the number of different subjects produced) increases the probability of subject revision.
Method
One-hour language samples were collected from 61 children (32 girls) at 27 months. Spontaneously produced, active declarative sentences (ADSs) were analyzed for subject diversity and the presence of subject revision and repetition. The number of different words produced, mean length of utterance, tense/agreement productivity score, and the number of ADSs were also measured.
Results
Regression analyses were performed with revision and repetition as the dependent variables. Subject diversity significantly predicted the probability of revision, whereas the number of ADSs predicted the probability of repetition.
Conclusion
The results support the hypothesis that subject diversity increases the probability of subject revision. It is proposed that lexical diversity within specific syntactic positions is the primary mechanism whereby revision rates increase with grammatical development. The results underscore the need to differentiate repetition from revision in the classification of disfluencies.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E1QzK0
via IFTTT

Voice, Articulation, and Prosody Contribute to Listener Perceptions of Speaker Gender: A Systematic Review and Meta-Analysis

Purpose
The aim of this study was to provide a systematic review of the aspects of verbal communication contributing to listener perceptions of speaker gender with a view to providing clinicians with guidance for the selection of the training goals when working with transsexual individuals.
Method
Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) guidelines were adopted in this systematic review. Studies evaluating the contribution of aspects of verbal communication to listener perceptions of speaker gender were rated against a new risk of bias assessment tool. Relevant data were extracted, and narrative synthesis was then conducted. Meta-analyses were conducted when appropriate data were available.
Results
Thirty-eight articles met the eligibility criteria. Meta-analysis showed speaking fundamental frequency contributing to 41.6% of the variance in gender perception. Auditory-perceptual and acoustic measures of pitch, resonance, loudness, articulation, and intonation were found to be associated with listeners' perceptions of speaker gender. Tempo and stress were not significantly associated. Mixed findings were found as to the contribution of a breathy voice quality to gender perception. Nonetheless, there exists significant risk of bias in this body of research.
Conclusions
Speech and language clinicians working with transsexual individuals may use the results of this review for goal setting. Further research is required to redress the significant risk of bias.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FEnvb7
via IFTTT

Cognitive Profiles of Finnish Preschool Children With Expressive and Receptive Language Impairment

Purpose
The aim of this study was to compare the verbal and nonverbal cognitive profiles of children with specific language impairment (SLI) with problems predominantly in expressive (SLI-E) or receptive (SLI-R) language skills. These diagnostic subgroups have not been compared before in psychological studies.
Method
Participants were preschool-age Finnish-speaking children with SLI diagnosed by a multidisciplinary team. Cognitive profile differences between the diagnostic subgroups and the relationship between verbal and nonverbal reasoning skills were evaluated.
Results
Performance was worse for the SLI-R subgroup than for the SLI-E subgroup not only in verbal reasoning and short-term memory but also in nonverbal reasoning, and several nonverbal subtests correlated significantly with the composite verbal index. However, weaknesses and strengths in the cognitive profiles of the subgroups were parallel.
Conclusions
Poor verbal comprehension and reasoning skills seem to be associated with lower nonverbal performance in children with SLI. Performance index (Performance Intelligence Quotient) may not always represent the intact nonverbal capacity assumed in SLI diagnostics, and a broader assessment is recommended when a child fails any of the compulsory Performance Intelligence Quotient subtests. Differences between the SLI subgroups appear quantitative rather than qualitative, in line with the new Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM V) classification (American Psychiatric Association, 2013).

from #Audiology via ola Kala on Inoreader http://ift.tt/2Em2uVQ
via IFTTT

A Meta-Analysis: Acoustic Measurement of Roughness and Breathiness

Purpose
Over the last 5 decades, many acoustic measures have been created to measure roughness and breathiness. The aim of this study is to present a meta-analysis of correlation coefficients (r) between auditory-perceptual judgment of roughness and breathiness and various acoustic measures in both sustained vowels and continuous speech.
Method
Scientific literature reporting perceptual–acoustic correlations on roughness and breathiness were sought in 28 databases. Weighted average correlation coefficients (r w) were calculated when multiple r-values were available for a specific acoustic marker. An r w ≥ .60 was the threshold for an acoustic measure to be considered acceptable.
Results
From 103 studies of roughness and 107 studies of breathiness that were investigated, only 33 studies and 34 studies, respectively, met the inclusion criteria of the meta-analysis on sustained vowels. Eighty-six acoustic measures were identified for roughness and 85 acoustic measures for breathiness on sustained vowels, in which 43 and 39 measures, respectively, yielded multiple r-values. Finally, only 14 measures for roughness and 12 measures for breathiness produced r w ≥ .60. On continuous speech, 4 measures for roughness and 21 measures for breathiness were identified, yielding 3 and 6 measures, respectively, with multiple r-values in which only 1 and 2, respectively, had r w ≥ .60.
Conclusion
This meta-analysis showed that only a few acoustic parameters were determined as the best estimators for roughness and breathiness.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E6Oy1C
via IFTTT

Remote Microphone System Use at Home: Impact on Caregiver Talk

Purpose
The purpose of this study was to investigate the effects of home use of a remote microphone system (RMS) on the spoken language production of caregivers with young children who have hearing loss.
Method
Language Environment Analysis recorders were used with 10 families during 2 consecutive weekends (RMS weekend and No-RMS weekend). The amount of talk from a single caregiver that could be made accessible to children with hearing loss when using an RMS was estimated using Language Environment Analysis software. The total amount of caregiver talk (close and far talk) was also compared across both weekends. In addition, caregivers' perceptions of RMS use were gathered.
Results
Children, with the use of RMSs, could potentially have access to approximately 42% more words per day. In addition, although caregivers produced an equivalent number of words on both weekends, they tended to talk more from a distance when using the RMS than when not. Finally, caregivers reported positive perceived communication benefits of RMS use.
Conclusions
Findings from this investigation suggest that children with hearing loss have increased access to caregiver talk when using an RMS in the home environment. Clinical implications and future directions for research are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2EOV363
via IFTTT

Well-Being and Resilience in Children With Speech and Language Disorders

Purpose
Children with speech and language disorders are at risk in relation to psychological and social well-being. The aim of this study was to understand the experiences of these children from their own perspectives focusing on risks to their well-being and protective indicators that may promote resilience.
Method
Eleven 9- to 12-year-old children (4 boys and 7 girls) were recruited using purposeful sampling. One participant presented with a speech sound disorder, 1 presented with both a speech and language disorder, and 9 with language disorders. All were receiving additional educational supports. Narrative inquiry, a qualitative design, was employed. Data were generated in home and school settings using multiple semi-structured interviews with each child over a 6-month period. A total of 59 interviews were conducted. The data were analyzed to identify themes in relation to potential risk factors to well-being and protective strategies.
Results
Potential risk factors in relation to well-being were communication impairment and disability, difficulties with relationships, and concern about academic achievement. Potential protective strategies were hope, agency, and positive relationships.
Conclusion
This study highlights the importance of listening to children's narratives so that those at risk in relation to well-being can be identified. Conceptualization of well-being and resilience within an ecological framework may enable identification of protective strategies at both individual and environmental levels that can be strengthened to mitigate negative experiences.

from #Audiology via ola Kala on Inoreader http://ift.tt/2DRHG8f
via IFTTT

The Effect of Remote Masking on the Reception of Speech by Young School-Age Children

Purpose
Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults.
Method
Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups.
Results
It was found that speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects.
Conclusions
In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.

from #Audiology via ola Kala on Inoreader http://ift.tt/2Ey83xO
via IFTTT

A Narrative Evaluation of Mandarin-Speaking Children With Language Impairment

Purpose
We aimed to study narrative skills in Mandarin-speaking children with language impairment (LI) to compare with children with LI speaking Indo-European languages.
Method
Eighteen Mandarin-speaking children with LI (mean age 6;2 [years;months]) and 18 typically developing (TD) age controls told 3 stories elicited using the Mandarin Expressive Narrative Test (de Villiers & Liu, 2014). We compared macrostructure-evaluating descriptions of characters, settings, initiating events, internal responses,plans, actions, and consequences. We also studied general microstructure, including productivity, lexical diversity, syntactic complexity, and grammaticality. In addition, we compared the use of 6 fine-grained microstructure elements that evaluate particular Mandarin linguistic features.
Results
Children with LI exhibited weaknesses in 5 macrostructure elements, lexical diversity, syntactic complexity, and 3 Mandarin-specific, fine-grained microstructure elements. Children with LI and TD controls demonstrated comparable performance on 2 macrostructure elements, productivity, grammaticality, and the remaining 3 fine-grained microstructure features.
Conclusions
Similarities and differences are noted in narrative profiles of children with LI who speak Mandarin versus those who speak Indo-European languages. The results are consistent with the view that profiles of linguistic deficits are shaped by the ambient language. Clinical implications are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FigjkE
via IFTTT

Masthead



from #Audiology via ola Kala on Inoreader http://ift.tt/2EvmFRY
via IFTTT

Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit

Purpose
Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, at face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate.
Method
We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis.
Results
Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency.
Conclusions
Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicates that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.
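
One plausible way to quantify error-type consistency across repeated productions, sketched below, is the proportion of productions sharing the modal error type at a chosen unit of analysis. This is an assumed index for illustration, not necessarily the metric used in the study; the error labels are hypothetical.

```python
from collections import Counter

def type_consistency(error_types):
    """Proportion of repeated productions sharing the modal error type.

    `error_types` lists the coded error type for one target unit
    (word, syllable, or segment) across repetitions. Illustrative only.
    """
    if not error_types:
        return None
    counts = Counter(error_types)
    return counts.most_common(1)[0][1] / len(error_types)

# Five repetitions of one target word coded at the word level:
print(type_consistency(["distortion", "distortion", "substitution",
                        "distortion", "omission"]))  # -> 0.6
```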

from #Audiology via ola Kala on Inoreader http://ift.tt/2FBwIB8
via IFTTT

Mechanisms of Vowel Variation in African American English

Purpose
This research explored mechanisms of vowel variation in African American English by comparing 2 geographically distant groups of African American and White American English speakers for participation in the African American Shift and the Southern Vowel Shift.
Method
Thirty-two male (African American: n = 16, White American controls: n = 16) lifelong residents of cities in eastern and western North Carolina produced heed, hid, heyd, head, had, hod, hawed, whod, hood, hoed, hide, howed, hoyd, and heard 3 times each in random order. Acoustic analyses of formant frequencies and durations were completed for the vowels /i, ɪ, e, ɛ, æ, ɑ, ɔ, u, ʊ, o, aɪ, aʊ, oɪ, ɝ/ produced in the listed words.
Results
African American English speakers show vowel variation. In the west, the African American English speakers are participating in the Southern Vowel Shift and hod fronting of the African American Shift. In the east, neither the African American English speakers nor their White peers are participating in the Southern Vowel Shift. The African American English speakers show limited participation in the African American Shift.
Conclusion
The results provide evidence of regional and socio-ethnic variation in African American English in North Carolina.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E5Rj39
via IFTTT

Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

Purpose
The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial attributions, and perceived speech tempo. We also examined the voice dimensions affecting the propensity to engage in social interactions.
Method
Twenty-five younger (age 19–37 years) and 25 older (age 51–74 years) healthy adults participated in this cross-sectional study. Their task was to evaluate the voices of 80 talkers.
Results
Statistical analyses revealed limited effects of the age of the listener on voice evaluation. Specifically, older listeners provided relatively more favorable voice ratings than younger listeners, mainly in terms of roughness. In contrast, the age of the talker had a broader impact on voice evaluation, affecting auditory-perceptual evaluations, psychosocial attributions, and perceived speech tempo. Some of these talker differences were dependent upon the sex of the talker and his or her smoking status. Finally, the results also show that voice-related psychosocial attributions were more strongly associated with listeners' propensity to engage in social interactions with a talker than were auditory-perceptual dimensions or perceived speech tempo, especially for the younger adults.
Conclusions
These results suggest that age has a broad influence on voice evaluation, with a stronger impact for talker age compared with listener age. While voice-related psychosocial attributions may be an important determinant of social interactions, perceived voice quality and speech tempo appear to be less influential.
Supplemental Materials
https://doi.org/10.23641/asha.5844102

from #Audiology via ola Kala on Inoreader http://ift.tt/2DSbNZV
via IFTTT

Erratum



from #Audiology via ola Kala on Inoreader http://ift.tt/2Ea2esZ
via IFTTT

Utterance Duration as It Relates to Communicative Variables in Infant Vocal Development

Purpose
We aimed to provide novel information on utterance duration as it relates to vocal type, facial affect, gaze direction, and age in the prelinguistic/early linguistic infant.
Method
Infant utterances were analyzed from longitudinal recordings of 15 infants at 8, 10, 12, 14, and 16 months of age. Utterance durations were measured and coded for vocal type (i.e., squeal, growl, raspberry, vowel, cry, laugh), facial affect (i.e., positive, negative, neutral), and gaze direction (i.e., to person, to mirror, or not directed).
Results
Of the 18,236 utterances analyzed, durations were typically shortest at 14 months of age and longest at 16 months of age. Statistically significant changes were observed in utterance durations across age for all variables of interest.
Conclusion
Despite variation in duration of infant utterances, developmental patterns were observed. For these infants, utterance durations appear to become more consolidated later in development, after the 1st year of life. Indeed, 12 months is often noted as the typical age of onset for 1st words and may be a point in time when utterance durations begin to show patterns across communicative variables.

from #Audiology via ola Kala on Inoreader http://ift.tt/2E7Z05L
via IFTTT

Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

Purpose
The purpose of this study was to evaluate whether listeners with normal hearing who perceive noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech is mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the target. We also assessed whether the spectral resolution of the noise-vocoded stimuli affected the presence of LRM and SRM under these conditions.
Method
In Experiment 1, a mixed factorial design was used to simultaneously manipulate the masker language (within-subject, English vs. Dutch), the simulated masker location (within-subject, right, center, left), and the spectral resolution (between-subjects, 6 vs. 12 channels) of noise-vocoded target–masker combinations presented at +25 dB signal-to-noise ratio (SNR). In Experiment 2, the study was repeated using a spectral resolution of 12 channels at +15 dB SNR.
Results
In both experiments, listeners' intelligibility of noise-vocoded targets was better when the background masker was Dutch, demonstrating reliable LRM in all conditions. The pattern of results in Experiment 1 was not reliably different across the 6- and 12-channel noise-vocoded speech. Finally, a reliable spatial benefit (SRM) was detected only in the more challenging SNR condition (Experiment 2).
Conclusion
The current study is the first to report a clear LRM benefit in noise-vocoded speech-in-speech recognition. Our results indicate that this benefit is available even under spectrally degraded conditions and that it may augment the benefit due to spatial separation of target speech and competing backgrounds.
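
Noise vocoding, the degradation used here, divides speech into frequency bands, extracts each band's amplitude envelope, and uses the envelopes to modulate band-limited noise, so fewer channels means coarser spectral resolution. The sketch below shows the general technique under assumed parameters (log-spaced bands, 4th-order Butterworth filters, 30 Hz envelope cutoff); the studies' exact filter specifications are not given in the abstract.

```python
# Generic noise-vocoder sketch; parameter values are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=6, lo=100.0, hi=7000.0, env_cutoff=30.0):
    edges = np.geomspace(lo, hi, n_channels + 1)              # log-spaced band edges
    env_sos = butter(2, env_cutoff, "low", fs=fs, output="sos")
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1, f2], "bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)                        # analysis band
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)  # rectify + smooth
        out += env * sosfiltfilt(band_sos, noise)              # envelope x noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)                 # rough normalization

fs = 16000
t = np.arange(fs) / fs                                         # 1-s test signal
test = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(test, fs, n_channels=6)                 # 6- vs 12-channel contrast
```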

from #Audiology via ola Kala on Inoreader http://ift.tt/2EA2u24
via IFTTT

Lingual Pressure as a Clinical Indicator of Swallowing Function in Parkinson's Disease

Purpose
Swallowing impairment, or dysphagia, is a known contributor to reduced quality of life, pneumonia, and mortality in Parkinson's disease (PD). However, the contribution of tongue dysfunction, specifically inadequate pressure generation, to dysphagia in PD remains unclear. Our purpose was to determine whether lingual pressures in PD (a) are reduced, (b) reflect medication state, and (c) are consistent with self-reported diet and swallowing function.
Method
Twenty-eight persons with idiopathic PD (PwPD) and 28 age- and sex-matched controls completed lingual pressure tasks with the Iowa Oral Performance Instrument. PwPD were tested during practically defined ON and OFF dopaminergic medication states. Participants were also stratified into three sex- and age-matched cohorts (7 men, 5 women): (a) controls, (b) PwPD without self-reported dysphagia symptoms or diet restrictions, and (c) PwPD with self-reported dysphagia symptoms with or without diet restrictions.
Results
PwPD exhibited reduced tongue strength and used elevated proportions of tongue strength during swallowing compared with controls (p < .05) without an effect of medication state (p > .05). Reduced tongue strength distinguished PwPD with self-reported dysphagia symptoms from PwPD without reported symptoms or diet restrictions (p = .045) and controls (p = .002).
Conclusion
Tongue strength was significantly reduced in PwPD and did not differ by medication state. Tongue strength differentiated between PwPD with and without self-reported swallowing symptoms. Therefore, measures of tongue strength and swallowing pressures may serve as clinical indicators for further dysphagia evaluation and may promote early diagnosis and management of dysphagia in PD.
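
The "proportion of tongue strength used during swallowing" reported above is, in essence, the swallowing pressure expressed as a percentage of maximum isometric pressure. A minimal sketch follows; the kPa values are hypothetical, not data from the study.

```python
def swallow_pressure_ratio(swallow_kpa, max_isometric_kpa):
    """Percentage of maximum isometric tongue pressure used in swallowing.

    Both values in kPa, as measured by devices such as the Iowa Oral
    Performance Instrument. Numbers below are hypothetical.
    """
    return 100.0 * swallow_kpa / max_isometric_kpa

print(swallow_pressure_ratio(25.0, 40.0))   # 62.5% of capacity (less reserve)
print(swallow_pressure_ratio(25.0, 65.0))   # ~38.5% of capacity (more reserve)
```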

from #Audiology via ola Kala on Inoreader http://ift.tt/2DVaN7m
via IFTTT

Changing the Subject: The Place of Revisions in Grammatical Development

Purpose
This article focuses on toddlers' revisions of the sentence subject and tests the hypothesis that subject diversity (i.e., the number of different subjects produced) increases the probability of subject revision.
Method
One-hour language samples were collected from 61 children (32 girls) at 27 months. Spontaneously produced, active declarative sentences (ADSs) were analyzed for subject diversity and the presence of subject revision and repetition. The number of different words produced, mean length of utterance, tense/agreement productivity score, and the number of ADSs were also measured.
Results
Regression analyses were performed with revision and repetition as the dependent variables. Subject diversity significantly predicted the probability of revision, whereas the number of ADSs predicted the probability of repetition.
Conclusion
The results support the hypothesis that subject diversity increases the probability of subject revision. It is proposed that lexical diversity within specific syntactic positions is the primary mechanism whereby revision rates increase with grammatical development. The results underscore the need to differentiate repetition from revision in the classification of disfluencies.
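
Because the outcome is the probability of revision, the "regression analyses" above imply a logistic model. Here is a hedged sketch of that kind of analysis with one simulated predictor; the variable names, coefficients, and data are hypothetical, not the study's.

```python
# Illustrative logistic regression: probability of subject revision
# predicted from subject diversity. All values simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 61                                   # one row per child, as in the sample size
subject_diversity = rng.poisson(8, n)    # number of different subjects (simulated)
logit = -2.0 + 0.15 * subject_diversity  # assumed true relationship
revised = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(subject_diversity.astype(float))
model = sm.Logit(revised, X).fit(disp=False)
print(model.params)                      # intercept and slope on the logit scale
```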

from #Audiology via ola Kala on Inoreader http://ift.tt/2E1QzK0
via IFTTT

Voice, Articulation, and Prosody Contribute to Listener Perceptions of Speaker Gender: A Systematic Review and Meta-Analysis

Purpose
The aim of this study was to provide a systematic review of the aspects of verbal communication that contribute to listener perceptions of speaker gender, with a view to providing clinicians with guidance for the selection of training goals when working with transsexual individuals.
Method
Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines were adopted in this systematic review. Studies evaluating the contribution of aspects of verbal communication to listener perceptions of speaker gender were rated against a new risk-of-bias assessment tool. Relevant data were extracted, and narrative synthesis was then conducted. Meta-analyses were conducted when appropriate data were available.
Results
Thirty-eight articles met the eligibility criteria. Meta-analysis showed that speaking fundamental frequency contributed 41.6% of the variance in gender perception. Auditory-perceptual and acoustic measures of pitch, resonance, loudness, articulation, and intonation were found to be associated with listeners' perceptions of speaker gender. Tempo and stress were not significantly associated. Findings were mixed as to the contribution of breathy voice quality to gender perception. Nonetheless, there is a significant risk of bias in this body of research.
Conclusions
Speech and language clinicians working with transsexual individuals may use the results of this review for goal setting. Further research is required to redress the significant risk of bias.
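
As a quick worked note on the 41.6% figure: assuming it is a proportion of variance explained (i.e., r squared), the implied correlation between speaking fundamental frequency and perceived gender is about .65.

```python
# Proportion of variance explained = r**2, so the implied correlation is:
r = 0.416 ** 0.5
print(round(r, 3))  # 0.645
```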

from #Audiology via ola Kala on Inoreader http://ift.tt/2FEnvb7
via IFTTT

Cognitive Profiles of Finnish Preschool Children With Expressive and Receptive Language Impairment

Purpose
The aim of this study was to compare the verbal and nonverbal cognitive profiles of children with specific language impairment (SLI) with problems predominantly in expressive (SLI-E) or receptive (SLI-R) language skills. These diagnostic subgroups have not been compared before in psychological studies.
Method
Participants were preschool-age Finnish-speaking children with SLI diagnosed by a multidisciplinary team. Cognitive profile differences between the diagnostic subgroups and the relationship between verbal and nonverbal reasoning skills were evaluated.
Results
Performance was worse for the SLI-R subgroup than for the SLI-E subgroup not only in verbal reasoning and short-term memory but also in nonverbal reasoning, and several nonverbal subtests correlated significantly with the composite verbal index. However, weaknesses and strengths in the cognitive profiles of the subgroups were parallel.
Conclusions
Poor verbal comprehension and reasoning skills seem to be associated with lower nonverbal performance in children with SLI. The performance index (Performance Intelligence Quotient) may not always represent the intact nonverbal capacity assumed in SLI diagnostics, and a broader assessment is recommended when a child fails any of the compulsory Performance Intelligence Quotient subtests. Differences between the SLI subgroups appear quantitative rather than qualitative, in line with the new Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) classification (American Psychiatric Association, 2013).

from #Audiology via ola Kala on Inoreader http://ift.tt/2Em2uVQ
via IFTTT

A Meta-Analysis: Acoustic Measurement of Roughness and Breathiness

Purpose
Over the last 5 decades, many acoustic measures have been created to measure roughness and breathiness. The aim of this study is to present a meta-analysis of correlation coefficients (r) between auditory-perceptual judgment of roughness and breathiness and various acoustic measures in both sustained vowels and continuous speech.
Method
Scientific literature reporting perceptual–acoustic correlations on roughness and breathiness was sought in 28 databases. Weighted average correlation coefficients (r_w) were calculated when multiple r-values were available for a specific acoustic marker. An r_w ≥ .60 was the threshold for an acoustic measure to be considered acceptable.
Results
From the 103 studies of roughness and 107 studies of breathiness that were investigated, only 33 and 34 studies, respectively, met the inclusion criteria of the meta-analysis on sustained vowels. Eighty-six acoustic measures were identified for roughness and 85 for breathiness on sustained vowels, of which 43 and 39 measures, respectively, yielded multiple r-values. Finally, only 14 measures for roughness and 12 for breathiness produced r_w ≥ .60. On continuous speech, 4 measures for roughness and 21 measures for breathiness were identified, yielding 3 and 6 measures, respectively, with multiple r-values, of which only 1 and 2, respectively, reached r_w ≥ .60.
Conclusion
This meta-analysis showed that only a few acoustic parameters qualify as acceptable estimators of roughness and breathiness.
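
The abstract does not state the weighting scheme behind r_w, so the sketch below assumes the common meta-analytic approach of Fisher z-transforming each correlation and weighting by n − 3 (inverse variance). The correlations and sample sizes are hypothetical.

```python
# Hedged sketch of a weighted average correlation (assumed Fisher-z method).
import math

def weighted_mean_r(rs, ns):
    zs = [math.atanh(r) for r in rs]          # Fisher z-transform of each r
    ws = [n - 3 for n in ns]                  # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                   # back-transform to the r scale

# Hypothetical correlations for one acoustic measure across 3 studies:
print(round(weighted_mean_r([0.55, 0.70, 0.62], [20, 45, 30]), 3))
```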

from #Audiology via ola Kala on Inoreader http://ift.tt/2E6Oy1C
via IFTTT

Remote Microphone System Use at Home: Impact on Caregiver Talk

Purpose
The purpose of this study was to investigate the effects of home use of a remote microphone system (RMS) on the spoken language production of caregivers of young children with hearing loss.
Method
Language Environment Analysis recorders were used with 10 families during 2 consecutive weekends (RMS weekend and No-RMS weekend). The amount of talk from a single caregiver that could be made accessible to children with hearing loss when using an RMS was estimated using Language Environment Analysis software. The total amount of caregiver talk (close and far talk) was also compared across both weekends. In addition, caregivers' perceptions of RMS use were gathered.
Results
With the use of RMSs, children could potentially have access to approximately 42% more words per day. In addition, although caregivers produced an equivalent number of words on both weekends, they tended to talk more from a distance when using the RMS than when not. Finally, caregivers reported positive perceived communication benefits of RMS use.
Conclusions
Findings from this investigation suggest that children with hearing loss have increased access to caregiver talk when using an RMS in the home environment. Clinical implications and future directions for research are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2EOV363
via IFTTT

Well-Being and Resilience in Children With Speech and Language Disorders

Purpose
Children with speech and language disorders are at risk in relation to psychological and social well-being. The aim of this study was to understand the experiences of these children from their own perspectives focusing on risks to their well-being and protective indicators that may promote resilience.
Method
Eleven 9- to 12-year-old children (4 boys and 7 girls) were recruited using purposeful sampling. One participant presented with a speech sound disorder, 1 with both a speech and a language disorder, and 9 with language disorders. All were receiving additional educational supports. Narrative inquiry, a qualitative design, was employed. Data were generated in home and school settings using multiple semistructured interviews with each child over a 6-month period. A total of 59 interviews were conducted. The data were analyzed to identify themes in relation to potential risk factors to well-being and protective strategies.
Results
Potential risk factors in relation to well-being were communication impairment and disability, difficulties with relationships, and concern about academic achievement. Potential protective strategies were hope, agency, and positive relationships.
Conclusion
This study highlights the importance of listening to children's narratives so that those at risk in relation to well-being can be identified. Conceptualization of well-being and resilience within an ecological framework may enable identification of protective strategies at both individual and environmental levels that can be strengthened to mitigate negative experiences.

from #Audiology via ola Kala on Inoreader http://ift.tt/2DRHG8f
via IFTTT

The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

Purpose
The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method
We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
Results
Recognition memory (indexed by d′) was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
Conclusions
Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.
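
For readers unfamiliar with d′, the index used for recognition memory above, a standard computation is z(hit rate) minus z(false-alarm rate). The sketch below uses a common log-linear correction for extreme rates; the trial counts are hypothetical.

```python
# Standard d' computation for an old/new recognition task.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf                 # inverse standard-normal CDF
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    h = (hits + 0.5) / (n_old + 1)           # log-linear correction avoids 0/1 rates
    f = (false_alarms + 0.5) / (n_new + 1)
    return z(h) - z(f)

print(round(d_prime(hits=40, misses=10,
                    false_alarms=12, correct_rejections=38), 2))
```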
Supplemental Materials
https://doi.org/10.23641/asha.5848059

from #Audiology via ola Kala on Inoreader http://ift.tt/2Bwp1x6
via IFTTT

Spatial Release From Masking in Adults With Bilateral Cochlear Implants: Effects of Distracter Azimuth and Microphone Location

Purpose
The primary purpose of this study was to derive spatial release from masking (SRM) performance-azimuth functions for bilateral cochlear implant (CI) users to provide a thorough description of SRM as a function of target/distracter spatial configuration. The secondary purpose of this study was to investigate the effect of the microphone location for SRM in a within-subject study design.
Method
Speech recognition was measured in 12 adults with bilateral CIs for 11 spatial separations ranging from −90° to +90° in 20° steps using an adaptive block design. Five of the 12 participants were tested with both the behind-the-ear microphones and a T-mic configuration to further investigate the effect of mic location on SRM.
Results
SRM can be significantly affected by the hemifield origin of the distracter stimulus—particularly for listeners with interaural asymmetry in speech understanding. The greatest SRM was observed with a distracter positioned 50° away from the target. There was no effect of mic location on SRM for the current experimental design.
Conclusion
Our results demonstrate that the traditional assessment of SRM with a distracter positioned at 90° azimuth may underestimate maximum performance for individuals with bilateral CIs.
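
SRM itself is typically computed as the performance difference between a spatially separated condition and the colocated baseline, which is what the performance-azimuth functions above trace out. A minimal sketch, with hypothetical percent-correct scores at a subset of the tested azimuths:

```python
# Hypothetical data; SRM = separated score minus colocated baseline.
colocated = 45.0  # % correct, target and distracter colocated
separated = {-90: 60.0, -50: 68.0, -10: 50.0, 10: 52.0, 50: 71.0, 90: 63.0}

srm = {az: score - colocated for az, score in separated.items()}
best_az = max(srm, key=srm.get)
print(srm)
print(f"Largest SRM at {best_az} deg: {srm[best_az]:.1f} points")
```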

from #Audiology via ola Kala on Inoreader http://ift.tt/2o3O2rV
via IFTTT

Implementation Research: Embracing Practitioners' Views

Purpose
This research explores practitioners' perspectives during the implementation of triadic gaze intervention (TGI), an evidence-based protocol for assessing and planning treatment targeting gaze as an early signal of intentional communication for young children with physical disabilities.
Method
Using qualitative methods, 7 practitioners from 1 early intervention center reported their perceptions about (a) early intervention for young children with physical disabilities, (b) acceptability and feasibility in the use of the TGI protocol in routine practice, and (c) feasibility of the TGI training. Qualitative data were gathered from 2 semistructured group interviews, once before and once after TGI training and implementation.
Results
Qualitative results documented the practitioners' reflections on recent changes to early intervention service delivery, the impact of such change on TGI adoption, and an overall strong enthusiasm for the TGI protocol, despite some need for adaptation.
Conclusion
These results are discussed relative to adapting the TGI protocol and training, when considering how to best bring about change in practice. More broadly, results highlighted the critical role of researcher–practitioner collaboration in implementation research and the value of qualitative data for gaining a richer understanding of practitioners' perspectives about the implementation process.

from #Audiology via ola Kala on Inoreader http://ift.tt/2sEznZh
via IFTTT
