Thursday, March 15, 2018

Influence of Language Load on Speech Motor Skill in Children With Specific Language Impairment

Purpose
Children with specific language impairment (SLI) show particular deficits in the generation of sequenced action: the quintessential procedural task. Practiced imitation of a sequence may become rote and require reduced procedural memory. This study explored whether speech motor deficits in children with SLI occur generally or only in conditions of high linguistic load, whether speech motor deficits diminish with practice, and whether it is beneficial to incorporate conditions of high load to understand speech production.
Method
Children with SLI and typical development participated in a syntactic priming task during which they generated sentences (high linguistic load) and, then, practiced repeating a sentence (low load) across 3 sessions. We assessed phonetic accuracy, speech movement variability, and duration.
Results
Children with SLI produced more variable articulatory movements than peers with typical development in the high load condition. The groups converged in the low load condition. Children with SLI continued to show increased articulatory stability over 3 practice sessions. Both groups produced generated sentences with increased duration and variability compared with repeated sentences.
Conclusions
Linguistic demands influence speech motor production. Children with SLI show reduced speech motor performance in tasks that require language generation but not when task demands are reduced in rote practice.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FEiEev
via IFTTT

Acoustic Predictors of Pediatric Dysarthria in Cerebral Palsy

Purpose
The objectives of this study were to identify acoustic characteristics of connected speech that differentiate children with dysarthria secondary to cerebral palsy (CP) from typically developing children and to identify acoustic measures that best detect dysarthria in children with CP.
Method
Twenty 5-year-old children with dysarthria secondary to CP were compared to 20 age- and sex-matched typically developing children on 5 acoustic measures of connected speech. A logistic regression approach was used to derive an acoustic model that best predicted dysarthria status.
Results
Results indicated that children with dysarthria secondary to CP differed from typically developing children on measures of multiple segmental and suprasegmental speech characteristics. An acoustic model containing articulation rate and the F2 range of diphthongs differentiated children with dysarthria from typically developing children with 87.5% accuracy.
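A two-predictor logistic classifier of the kind described can be sketched as follows. The data below are simulated purely for illustration (the group means and spreads are invented, not the study's), and the fit uses plain gradient descent rather than any particular statistics package:

```python
import math, random

random.seed(0)

# Hypothetical simulated measures (not the study's data): articulation
# rate (syllables/s) and diphthong F2 range (Hz) for TD vs. dysarthria.
td  = [(random.gauss(4.5, 0.4), random.gauss(900.0, 100.0)) for _ in range(20)]
dys = [(random.gauss(3.2, 0.4), random.gauss(600.0, 100.0)) for _ in range(20)]
X = td + dys
y = [0] * 20 + [1] * 20          # 1 = dysarthria

# Standardize each predictor so plain gradient descent behaves.
cols = list(zip(*X))
means = [sum(c) / len(c) for c in cols]
sds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
       for c, m in zip(cols, means)]
Xs = [[(v - m) / s for v, m, s in zip(row, means, sds)] for row in X]

# Logistic regression fit by stochastic gradient descent on the log loss.
w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    for xi, yi in zip(Xs, y):
        p = 1.0 / (1.0 + math.exp(-(b + w[0] * xi[0] + w[1] * xi[1])))
        b -= 0.1 * (p - yi)
        w = [wj - 0.1 * (p - yi) * xj for wj, xj in zip(w, xi)]

pred = [1 if 1.0 / (1.0 + math.exp(-(b + w[0] * x[0] + w[1] * x[1]))) >= 0.5
        else 0 for x in Xs]
accuracy = sum(p_ == t for p_, t in zip(pred, y)) / len(y)
```

On real data one would report cross-validated rather than training accuracy; the 87.5% figure reported by the authors comes from their model, not from this sketch.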
Conclusion
This study serves as a first step toward developing an acoustic model that can be used to improve early identification of dysarthria in children with CP.

from #Audiology via ola Kala on Inoreader http://ift.tt/2Dv9dIy
via IFTTT

Vocalization Subsystem Responses to a Temporarily Induced Unilateral Vocal Fold Paralysis

Purpose
The purpose of this study is to quantify the interactions of the 3 vocalization subsystems of respiration, phonation, and resonance before, during, and after a perturbation to the larynx (temporarily induced unilateral vocal fold paralysis) in 10 vocally healthy participants. Using dynamic systems theory as a guide, we hypothesized that data groupings would emerge revealing context-dependent patterns in the relationships of variables representing the 3 vocalization subsystems. We also hypothesized that group data would mask individual variability important to understanding the relationships among the vocalization subsystems.
Method
A perturbation paradigm was used to obtain respiratory kinematic, aerodynamic, and acoustic formant measures from 10 healthy participants (8 women, 2 men) with normal voices. Group and individual data were analyzed to provide a multilevel analysis of the data. A 3-dimensional state space model was constructed to demonstrate the interactive relationships among the 3 subsystems before, during, and after perturbation.
Results
During perturbation, group data revealed that lung volume initiations and terminations were lower, with longer respiratory excursions; airflow rates increased while subglottic pressures were maintained. Acoustic formant measures indicated that the spacing between the upper formants decreased (F3–F5), whereas the spacing between F1 and F2 increased. State space modeling revealed the changing directionality and interactions among the 3 subsystems.
Conclusions
Group data alone masked important variability necessary to understand the unique relationships among the 3 subsystems. Multilevel analysis permitted a richer understanding of the individual differences in phonatory regulation and permitted subgroup analysis. Dynamic systems theory may be a useful heuristic to model the interactive relationships among vocalization subsystems.
Supplemental Material
https://doi.org/10.23641/asha.5913532

from #Audiology via ola Kala on Inoreader http://ift.tt/2GxWtE2
via IFTTT

The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

Purpose
The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method
We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
Results
Recognition memory (indexed by d′) was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
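The d′ index above combines hit and false-alarm rates into a single sensitivity measure, z(hit rate) − z(false-alarm rate). A minimal sketch of the standard computation (with a log-linear correction for extreme rates; the study's exact correction is not specified here):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps rates away from 0 and 1, where
    the inverse normal CDF would be infinite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, a listener with 45 hits and 5 false alarms out of 50 trials each yields a higher d′ than one with 35 hits and 15 false alarms, and chance performance yields d′ = 0.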
Conclusions
Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.
Supplemental Materials
https://doi.org/10.23641/asha.5848059

from #Audiology via ola Kala on Inoreader http://ift.tt/2FLQryq
via IFTTT

Pitch and Time Processing in Speech and Tones: The Effects of Musical Training and Attention

Purpose
Musical training is often linked to enhanced auditory discrimination, but the relative roles of pitch and time in music and speech are unclear. Moreover, it is unclear whether pitch and time processing are correlated across individuals and how they may be affected by attention. This study aimed to examine pitch and time processing in speech and tone sequences, taking musical training and attention into account.
Method
Musicians (16) and nonmusicians (16) were asked to detect pitch or timing changes in speech and tone sequences and make a binary response. In some conditions, the participants were focused on 1 aspect of the stimulus (directed attention), and in others, they had to pay attention to all aspects at once (divided attention).
Results
As expected, musicians performed better overall. Performance scores on pitch and time tasks were correlated, as were performance scores for speech and tonal stimuli, but most markedly in musicians. All participants performed better on the directed versus divided attention task, but again, musicians performed better than nonmusicians.
Conclusion
In general, this experiment shows that individuals with better pitch discrimination also have better timing discrimination in the auditory domain. In addition, although musicians perform better overall, these results do not support the idea that musicians have an added advantage for divided attention tasks. These findings help clarify how musical training and attention affect pitch and time processing in the context of speech and tones and may have applications in special populations.
Supplemental Material
https://doi.org/10.23641/asha.5895997

from #Audiology via ola Kala on Inoreader http://ift.tt/2FCVqoV
via IFTTT

Implementation Research: Embracing Practitioners' Views

Purpose
This research explores practitioners' perspectives during the implementation of triadic gaze intervention (TGI), an evidence-based protocol for assessing and planning treatment targeting gaze as an early signal of intentional communication for young children with physical disabilities.
Method
Using qualitative methods, 7 practitioners from 1 early intervention center reported their perceptions about (a) early intervention for young children with physical disabilities, (b) acceptability and feasibility in the use of the TGI protocol in routine practice, and (c) feasibility of the TGI training. Qualitative data were gathered from 2 semistructured group interviews, once before and once after TGI training and implementation.
Results
Qualitative results documented the practitioners' reflections on recent changes to early intervention service delivery, the impact of such change on TGI adoption, and an overall strong enthusiasm for the TGI protocol, despite some need for adaptation.
Conclusion
These results are discussed relative to adapting the TGI protocol and training, when considering how to best bring about change in practice. More broadly, results highlighted the critical role of researcher–practitioner collaboration in implementation research and the value of qualitative data for gaining a richer understanding of practitioners' perspectives about the implementation process.

from #Audiology via ola Kala on Inoreader http://ift.tt/2DvvGVX
via IFTTT

Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease

Purpose
The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility.
Method
Participants were tested under permutations of low, mid, and high STN-DBS frequency, voltage, and pulse width settings. At each session, participants recited a sentence. Acoustic characteristics of vowel production were extracted, and naive listeners provided estimates of speech intelligibility.
Results
Overall, lower-frequency STN-DBS stimulation (60 Hz) was found to lead to improvements in intelligibility and acoustic vowel expansion. An interaction between speaker sex and STN-DBS stimulation was found for vowel measures. The combination of low frequency, mid to high voltage, and low to mid pulse width led to optimal speech outcomes; however, these settings did not demonstrate significant speech outcome differences compared with the standard clinical STN-DBS settings, likely due to substantial individual variability.
Conclusions
Although lower-frequency STN-DBS stimulation was found to yield consistent improvements in speech outcomes, it was not found to necessarily lead to the best speech outcomes for all participants. Nevertheless, frequency may serve as a starting point to explore settings that will optimize an individual's speech outcomes following STN-DBS surgery.
Supplemental Material
https://doi.org/10.23641/asha.5899228

from #Audiology via ola Kala on Inoreader http://ift.tt/2GwCvtg
via IFTTT

Targeting Complex Sentences in Older School Children With Specific Language Impairment: Results From an Early-Phase Treatment Study

Purpose
This study investigated the effects of a complex sentence treatment at 2 dosage levels on language performance of 30 school-age children ages 10–14 years with specific language impairment.
Method
Three types of complex sentences (adverbial, object complement, relative) were taught in sequence in once or twice weekly dosage conditions. Outcome measures included sentence probes administered at baseline, treatment, and posttreatment phases and comparisons of pre–post performance on oral and written language tests and tasks. Relationships between pretest variables and treatment outcomes were also explored.
Results
Treatment was effective at improving performance on the sentence probes for the majority of participants; however, results differed by sentence type, with the largest effect sizes for adverbial and relative clauses. Significant and clinically meaningful pre–post treatment gains were found on a comprehensive oral language test, but not on reading and writing measures. There was no treatment advantage for the higher dosage group. Several significant correlations indicated a relationship between lower pretest scores and higher outcome measures.
Conclusions
Results suggest that a focused intervention can produce improvements in complex sentence productions of older school children with language impairment. Future research should explore ways to maximize gains and extend impact to natural language contexts.
Supplemental Material
https://doi.org/10.23641/asha.5923318

from #Audiology via ola Kala on Inoreader http://ift.tt/2Du5k6D
via IFTTT

Dysarthria in Mandarin-Speaking Children With Cerebral Palsy: Speech Subsystem Profiles

Purpose
This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences.
Method
Data from 6 children with CP and a clinical diagnosis of moderate dysarthria were compared with data from 9 TD children using a multiple speech subsystems approach. Acoustic and perceptual variables reflecting 3 speech subsystems (articulatory-phonetic, phonatory, and prosodic), and speech intelligibility, were measured based on speech samples obtained from the Test of Children's Speech Intelligibility in Mandarin (developed in the lab for the purpose of this research).
Results
The CP and TD children differed in several aspects of speech subsystem function. Speech intelligibility scores in children with CP were influenced by all 3 speech subsystems, but articulatory-phonetic variables had the highest correlation with word intelligibility. All 3 subsystems influenced sentence intelligibility.
Conclusion
Children with CP demonstrated deficits in speech intelligibility and articulation compared with TD children. Better speech sound articulation was associated with higher word intelligibility but did not benefit sentence intelligibility.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FDZmpN
via IFTTT

Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users

Purpose
The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions.
Method
First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables.
Results
The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average.
Conclusions
We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FLRLBw
via IFTTT

Development of Velopharyngeal Closure for Vocalization During the First 2 Years of Life

Purpose
The vocalizations of young infants often sound nasalized, suggesting that the velopharynx is open during the 1st few months of life. Whereas acoustic and perceptual studies seemed to support the idea that the velopharynx closes for vocalization by about 4 months of age, an aeromechanical study contradicted this (Thom, Hoit, Hixon, & Smith, 2006). Thus, the current large-scale investigation was undertaken to determine when the velopharynx closes for speech production by following infants during their first 2 years of life.
Method
This longitudinal study used nasal ram pressure to determine the status of the velopharynx (open or closed) during spontaneous speech production in 92 participants (46 male, 46 female) studied monthly from age 4 to 24 months.
Results
The velopharynx was closed during at least 90% of the utterances by 19 months, though there was substantial variability across participants. When considered by sound category, the velopharynx was closed from most to least often during production of oral obstruents, approximants, vowels (only), and glottal obstruents. No sex effects were observed.
Conclusion
Velopharyngeal closure for spontaneous speech production can be considered complete by 19 months, but closure occurs earlier for speech sounds with higher oral pressure demands.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FHaZMA
via IFTTT

Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech

Purpose
Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to improve their accuracy and reproducibility.
Method
Basic information is put together from standards, technical, voice and speech literature, and practical experience of the authors and is explained for nontechnical readers.
Results
Variation of SPL with distance, sound level meters and their accuracy, frequency and time weightings, and background noise topics are reviewed. Several calibration procedures for SPL measurements are described for stand-mounted and head-mounted microphones.
Conclusions
SPL of voice and speech should be reported together with the mouth-to-microphone distance so that the levels can be related to vocal power. Sound level measurement settings (i.e., frequency weighting and time weighting/averaging) should always be specified. Classified sound level meters should be used to assure measurement accuracy. Head-mounted microphones placed in proximity to the mouth improve signal-to-noise ratio and, when calibrated, can be used for voice SPL measurements. Background noise levels should be reported alongside the sound levels of voice and speech.
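The distance dependence that motivates these guidelines follows the free-field inverse-distance law (SPL falls by about 6 dB per doubling of distance, with level defined as 20 log₁₀(p/p₀) re 20 µPa). A minimal sketch, assuming free-field conditions and a point-like source:

```python
import math

P_REF = 20e-6  # reference sound pressure, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

def spl_at_distance(spl_measured_db, d_measured_m, d_target_m):
    """Convert an SPL measured at one mouth-to-microphone distance to
    another distance, assuming the free-field inverse-distance law
    (about -6 dB per doubling of distance)."""
    return spl_measured_db + 20.0 * math.log10(d_measured_m / d_target_m)
```

For example, 70 dB SPL measured at 0.3 m corresponds to roughly 64 dB SPL at 0.6 m under these assumptions; real rooms and near-field head-mounted microphones deviate from the ideal law, which is why the measurement distance must always be reported.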

from #Audiology via ola Kala on Inoreader http://ift.tt/2DvKKTd
via IFTTT

Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

Purpose
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data.
Method
We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile–quantile plots.
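The Cox partial likelihood at the heart of such models can be sketched for a single covariate without random effects (the mixed/frailty extension and the authors' R implementation are beyond this sketch, and the durations below are simulated, not the study's):

```python
import math, random

def cox_nll(beta, times, x):
    """Negative log partial likelihood for a single-covariate Cox model,
    assuming no ties, no censoring, and no random effects."""
    order = sorted(range(len(times)), key=lambda i: times[i], reverse=True)
    denom, nll = 0.0, 0.0
    for i in order:            # latest event first, so the risk set grows
        denom += math.exp(beta * x[i])
        nll -= beta * x[i] - math.log(denom)
    return nll

# Simulated skewed durations: hazard exp(beta_true * x), so x = 1
# shortens durations on average (illustrative data only).
random.seed(1)
beta_true = 1.0
x = [float(i % 2) for i in range(150)]
times = [random.expovariate(math.exp(beta_true * xi)) for xi in x]

# Coarse grid search for the maximum partial-likelihood estimate
# (the negative log partial likelihood is convex in beta).
beta_hat = min((b / 100.0 for b in range(-200, 201)),
               key=lambda b: cox_nll(b, times, x))
```

Because the model works on the ranks of the event times through the risk sets, it accommodates the nonnegative, skewed durations that defeat linear and log-linear mixed models.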
Results
We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not.
Conclusions
We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

from #Audiology via ola Kala on Inoreader http://ift.tt/2GvXCvR
via IFTTT

What Does a Cue Do? Comparing Phonological and Semantic Cues for Picture Naming in Aphasia

Purpose
Impaired naming is one of the most common symptoms in aphasia, often treated with cued picture naming paradigms. It has been argued that semantic cues facilitate the reliable categorization of the picture, and phonological cues facilitate the retrieval of target phonology. To test these hypotheses, we compared the effectiveness of phonological and semantic cues in picture naming for a group of individuals with aphasia. To establish the locus of effective cueing, we also tested whether cue type interacted with lexical and image properties of the targets.
Method
Individuals with aphasia (n = 10) were tested with a within-subject design. They named a large set of items (n = 175) 4 times. Each presentation of the items was accompanied by a different cueing condition (phonological, semantic, nonassociated word and tone). Item level variables for the targets (i.e., phoneme length, frequency, imageability, name agreement, and visual complexity) were used to test the interaction of cue type and item variables. Naming accuracy data were analyzed using generalized linear mixed effects models.
Results
Phonological cues were more effective than semantic cues, improving accuracy across individuals. However, phonological cues did not interact with phonological or lexical aspects of the picture names (e.g., phoneme length, frequency). Instead, they interacted with properties of the picture itself (i.e., visual complexity), such that phonological cues improved naming accuracy for items with low visual complexity.
Conclusions
The findings challenge the theoretical assumptions that phonological cues map to phonological processes. Instead, phonological information benefits the earliest stages of picture recognition, aiding the initial categorization of the target. The data help to explain why patterns of cueing are not consistent in aphasia; that is, it is not the case that phonological impairments always benefit from phonological cues and semantic impairments from semantic cues. A substantial amount of the literature in naming therapy focuses on picture naming paradigms. Therefore, the results are also critically important for rehabilitation, allowing for therapy development to be more rooted in the true mechanisms through which cues are processed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FMDhRV
via IFTTT

Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

Purpose
Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with language impairment, speech delay, and typically developing peers.
Method
Speech perception was measured by discrimination of synthesized speech syllable continua that varied in frequency (/dɑ/–/ɡɑ/). Groups were classified by performance on speech and language assessments and compared on syllable discrimination thresholds. Within-group variability was also evaluated.
Results
Children with CAS without language impairment did not significantly differ in syllable discrimination compared to typically developing peers. In contrast, those with CAS and language impairment showed significantly poorer syllable discrimination abilities compared to children with CAS only and typically developing peers. Children with speech delay and language impairment also showed significantly poorer discrimination abilities, with appreciable within-group variability.
Conclusions
These findings suggest that speech perception deficits are not a core feature of CAS but rather occur with co-occurring language impairment in a subset of children with CAS. This study establishes the significance of accounting for language ability in children with CAS.
Supplemental Materials
https://doi.org/10.23641/asha.5848056

from #Audiology via ola Kala on Inoreader http://ift.tt/2FFIZJn
via IFTTT

Masked Repetition Priming Treatment for Anomia

Purpose
Masked priming has been suggested as a way to directly target implicit lexical retrieval processes in aphasia. This study was designed to investigate repeated use of masked repetition priming to improve picture naming in individuals with anomia due to aphasia.
Method
A single-subject, multiple-baseline design was used across 6 people with aphasia. Training involved repeated exposure to pictures that were paired with masked identity primes or sham primes. Two semantic categories were trained in series for each participant. Analyses assessed treatment effects, generalization within and across semantic categories, and effects on broader language skills, immediately and 3 months after treatment.
Results
Four of the 6 participants improved in naming trained items immediately after treatment. Improvements were generally greater for items that were presented in training with masked identity primes than items that were presented repeatedly during training with masked sham primes. Generalization within and across semantic categories was limited. Generalization to broader language skills was inconsistent.
Conclusion
Masked repetition priming may improve naming for some individuals with anomia due to aphasia. A number of methodological and theoretical insights into further development of this treatment approach are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FJGmSt
via IFTTT

Speech Adaptation to Kinematic Recording Sensors: Perceptual and Acoustic Findings

Purpose
This study used perceptual and acoustic measures to examine the time course of speech adaptation after the attachment of electromagnetic sensor coils to the tongue, lips, and jaw.
Method
Twenty native English speakers read aloud stimulus sentences before the attachment of the sensors, immediately after attachment, and again 5, 10, 15, and 20 min later. They read aloud continuously between recordings to encourage adaptation. Sentence recordings were perceptually evaluated by 20 native English listeners, who rated 150 stimuli (which included 31 samples that were repeated to assess rater reliability) using a visual analog scale with the end points labeled as “precise” and “imprecise.” Acoustic analysis began by segmenting and measuring the duration of the fricatives /s/ and /ʃ/ as well as the whole sentence. The spectral center of gravity and spectral standard deviation of the 2 fricatives were measured using Praat. These phonetic targets were selected because the standard placement of sensor coils on the lingual surface was anticipated to interfere with normal fricative production, causing them to become distorted.
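The spectral center of gravity is the power-weighted mean frequency of a spectrum. A stdlib-only sketch using a naive DFT (Praat's implementation differs in its windowing and weighting options; this assumes power weighting on a single rectangular-windowed frame):

```python
import math

def spectral_centroid(signal, sample_rate):
    """Power-weighted mean frequency of one frame, via a naive DFT.
    Comparable to a spectral 'centre of gravity' with power weighting."""
    n = len(signal)
    num = den = 0.0
    for k in range(1, n // 2):   # positive-frequency bins, skipping DC
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                 for t in range(n))
        power = re * re + im * im
        num += (k * sample_rate / n) * power
        den += power
    return num / den

# A pure 1000 Hz tone has its center of gravity at 1000 Hz.
sr = 8000
tone = [math.sin(2 * math.pi * 1000.0 * t / sr) for t in range(512)]
centroid = spectral_centroid(tone, sr)
```

For fricatives, a lower center of gravity for /s/ (as reported in the Results) indicates energy shifted toward lower frequencies, consistent with a more posterior or distorted constriction.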
Results
Perceptual ratings revealed a decrease in speech precision after sensor attachment and evidence of adaptation over time; there was little perceptual change beyond the 10-min recording. The spectral center of gravity for /s/ decreased, and the spectral standard deviation for /ʃ/ increased after sensor attachment, but the acoustic measures showed no evidence of adaptation over time.
Conclusion
The findings suggest that 10 min may be sufficient time to allow speakers to adapt before experimental data collection with Northern Digital Instruments Wave electromagnetic sensors.

from #Audiology via ola Kala on Inoreader http://ift.tt/2GujIyF
via IFTTT

An Initial Investigation of the Neural Correlates of Word Processing in Preschoolers With Specific Language Impairment

Purpose
Previous behavioral studies have found deficits in lexical–semantic abilities in children with specific language impairment (SLI), including reduced depth and breadth of word knowledge. This study explored the neural correlates of early emerging familiar word processing in preschoolers with SLI and typical development.
Method
Fifteen preschoolers with typical development and 15 preschoolers with SLI were presented with pictures followed after a brief delay by an auditory label that did or did not match. Event-related brain potentials were time locked to the onset of the auditory labels. Children provided verbal judgments of whether the label matched the picture.
Results
There were no group differences in the accuracy of identifying when pictures and labels matched or mismatched. Event-related brain potential data revealed that mismatch trials elicited a robust N400 in both groups, with no group differences in mean amplitude or peak latency. However, the typically developing group demonstrated a more robust late positive component, elicited by mismatch trials.
Conclusions
These initial findings indicate that lexical–semantic access of early acquired words, indexed by the N400, does not differ between preschoolers with SLI and typical development when highly familiar words are presented in isolation. However, the typically developing group demonstrated a more mature profile of postlexical reanalysis and integration, indexed by an emerging late positive component. The findings lay the necessary groundwork for better understanding processing of newly learned words in children with SLI.

from #Audiology via ola Kala on Inoreader http://ift.tt/2Dv98EK
via IFTTT

Metapragmatic Explicitation and Social Attribution in Social Communication Disorder and Developmental Language Disorder: A Comparative Study

Purpose
The purposes of this study are to investigate metapragmatic (MP) ability in 6–11-year-old children with social communication disorder (SCD), developmental language disorder (DLD), and typical language development and to explore factors associated with MP explicitation and social understanding (SU).
Method
In this cross-sectional study, all participants (N = 82) completed an experimental task, the Assessment of Metapragmatics (Collins et al., 2014), in which pragmatic errors are identified in filmed interactions. Responses were scored for complexity/type of explicitation (MP score) and attribution of social characteristics to the films' characters (SU score).
Results
Groups with SCD and DLD had significantly lower MP scores and less sophisticated explicitation than the group with typical language development. After controlling for language and age, the group with SCD had significantly lower SU scores than the group with DLD. Significant correlations were found between MP scores and both age and language ability, but not pragmatic impairment.
Conclusions
Children with SCD or DLD performed poorly on an MP task compared with typically developing children but did not differ from each other in their ability to reflect verbally on pragmatic features in interactions. MP ability appears to be closely related to structural language ability. The limited ability of children with SCD to attribute social/psychological states to interlocutors may indicate additional social attribution limitations.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FHrE2k
via IFTTT

Acoustic Predictors of Pediatric Dysarthria in Cerebral Palsy

Purpose
The objectives of this study were to identify acoustic characteristics of connected speech that differentiate children with dysarthria secondary to cerebral palsy (CP) from typically developing children and to identify acoustic measures that best detect dysarthria in children with CP.
Method
Twenty 5-year-old children with dysarthria secondary to CP were compared to 20 age- and sex-matched typically developing children on 5 acoustic measures of connected speech. A logistic regression approach was used to derive an acoustic model that best predicted dysarthria status.
Results
Results indicated that children with dysarthria secondary to CP differed from typically developing children on measures of multiple segmental and suprasegmental speech characteristics. An acoustic model containing articulation rate and the F2 range of diphthongs differentiated children with dysarthria from typically developing children with 87.5% accuracy.
Conclusion
This study serves as a first step toward developing an acoustic model that can be used to improve early identification of dysarthria in children with CP.
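The two-predictor acoustic model described above is a standard logistic regression. The sketch below fits one by gradient descent on synthetic, invented features (the group means and spreads are hypothetical stand-ins, not the study's measurements):

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp when the classes are well separated.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit logistic regression (with intercept) by gradient descent on log-loss."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        grad = Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
        w -= lr * grad
    return w

def predict(X, w):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ w) > 0.5).astype(int)

# Hypothetical z-scored predictors: articulation rate and diphthong F2 range.
# Label 1 = dysarthria secondary to CP, 0 = typically developing.
rng = np.random.default_rng(0)
td = rng.normal([1.0, 1.0], 0.5, size=(20, 2))     # higher rate, wider F2 range
cp = rng.normal([-1.0, -1.0], 0.5, size=(20, 2))   # lower rate, narrower F2 range
X = np.vstack([td, cp])
y = np.array([0] * 20 + [1] * 20)

w = fit_logistic(X, y)
accuracy = (predict(X, w) == y).mean()  # in-sample classification accuracy
```

The study itself presumably used standard statistical software; this only illustrates the mechanics of a two-feature logistic classifier of dysarthria status.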

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2Dv9dIy
via IFTTT

Vocalization Subsystem Responses to a Temporarily Induced Unilateral Vocal Fold Paralysis

Purpose
The purpose of this study is to quantify the interactions of the 3 vocalization subsystems of respiration, phonation, and resonance before, during, and after a perturbation to the larynx (temporarily induced unilateral vocal fold paralysis) in 10 vocally healthy participants. Using dynamic systems theory as a guide, we hypothesized that data groupings would emerge revealing context-dependent patterns in the relationships of variables representing the 3 vocalization subsystems. We also hypothesized that group data would mask individual variability important to understanding the relationships among the vocalization subsystems.
Method
A perturbation paradigm was used to obtain respiratory kinematic, aerodynamic, and acoustic formant measures from 10 healthy participants (8 women, 2 men) with normal voices. Group and individual data were analyzed to provide a multilevel analysis of the data. A 3-dimensional state space model was constructed to demonstrate the interactive relationships among the 3 subsystems before, during, and after perturbation.
Results
During perturbation, group data revealed that lung volume initiations and terminations were lower, with longer respiratory excursions; airflow rates increased while subglottic pressures were maintained. Acoustic formant measures indicated that the spacing between the upper formants (F3–F5) decreased, whereas the spacing between F1 and F2 increased. State space modeling revealed the changing directionality and interactions among the 3 subsystems.
Conclusions
Group data alone masked important variability necessary to understand the unique relationships among the 3 subsystems. Multilevel analysis permitted a richer understanding of the individual differences in phonatory regulation and permitted subgroup analysis. Dynamic systems theory may be a useful heuristic to model the interactive relationships among vocalization subsystems.
Supplemental Material
https://doi.org/10.23641/asha.5913532

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2GxWtE2
via IFTTT

The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

Purpose
The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method
We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
Results
Recognition memory (indexed by d′) was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
Conclusions
Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.
Supplemental Materials
https://doi.org/10.23641/asha.5848059
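The d′ index used above is the standard signal-detection sensitivity measure: the difference between the normal-deviate (z) transforms of the hit and false-alarm rates. A minimal stdlib sketch, with made-up rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 must be corrected first (e.g., a 1/(2N)
    adjustment), since the inverse normal CDF is unbounded there.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for one listener: 85% hits, 20% false alarms.
sensitivity = d_prime(0.85, 0.20)  # higher = better recognition memory
```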

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FLQryq
via IFTTT

Pitch and Time Processing in Speech and Tones: The Effects of Musical Training and Attention

Purpose
Musical training is often linked to enhanced auditory discrimination, but the relative roles of pitch and time in music and speech are unclear. Moreover, it is unclear whether pitch and time processing are correlated across individuals and how they may be affected by attention. This study aimed to examine pitch and time processing in speech and tone sequences, taking musical training and attention into account.
Method
Sixteen musicians and 16 nonmusicians were asked to detect pitch or timing changes in speech and tone sequences and make a binary response. In some conditions, the participants were focused on 1 aspect of the stimulus (directed attention), and in others, they had to pay attention to all aspects at once (divided attention).
Results
As expected, musicians performed better overall. Performance scores on pitch and time tasks were correlated, as were performance scores for speech and tonal stimuli, but most markedly in musicians. All participants performed better on the directed versus divided attention task, but again, musicians performed better than nonmusicians.
Conclusion
In general, this experiment shows that individuals with better pitch discrimination also have better timing discrimination in the auditory domain. In addition, although musicians perform better overall, these results do not support the idea that musicians have an added advantage for divided attention tasks. These findings clarify how musical training and attention affect pitch and time processing in the context of speech and tones and may have applications in special populations.
Supplemental Material
https://doi.org/10.23641/asha.5895997

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FCVqoV
via IFTTT

Implementation Research: Embracing Practitioners' Views

Purpose
This research explores practitioners' perspectives during the implementation of triadic gaze intervention (TGI), an evidence-based protocol for assessing and planning treatment targeting gaze as an early signal of intentional communication for young children with physical disabilities.
Method
Using qualitative methods, 7 practitioners from 1 early intervention center reported their perceptions about (a) early intervention for young children with physical disabilities, (b) acceptability and feasibility in the use of the TGI protocol in routine practice, and (c) feasibility of the TGI training. Qualitative data were gathered from 2 semistructured group interviews, once before and once after TGI training and implementation.
Results
Qualitative results documented the practitioners' reflections on recent changes to early intervention service delivery, the impact of such change on TGI adoption, and an overall strong enthusiasm for the TGI protocol, despite some need for adaptation.
Conclusion
These results are discussed relative to adapting the TGI protocol and training, when considering how to best bring about change in practice. More broadly, results highlighted the critical role of researcher–practitioner collaboration in implementation research and the value of qualitative data for gaining a richer understanding of practitioners' perspectives about the implementation process.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2DvvGVX
via IFTTT

Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease

Purpose
The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility.
Method
Participants were tested under permutations of low, mid, and high STN-DBS frequency, voltage, and pulse width settings. At each session, participants recited a sentence. Acoustic characteristics of vowel production were extracted, and naive listeners provided estimates of speech intelligibility.
Results
Overall, lower-frequency STN-DBS stimulation (60 Hz) was found to lead to improvements in intelligibility and acoustic vowel expansion. An interaction between speaker sex and STN-DBS stimulation was found for vowel measures. The combination of low frequency, mid to high voltage, and low to mid pulse width led to optimal speech outcomes; however, these settings did not demonstrate significant speech outcome differences compared with the standard clinical STN-DBS settings, likely due to substantial individual variability.
Conclusions
Although lower-frequency STN-DBS stimulation was found to yield consistent improvements in speech outcomes, it was not found to necessarily lead to the best speech outcomes for all participants. Nevertheless, frequency may serve as a starting point to explore settings that will optimize an individual's speech outcomes following STN-DBS surgery.
Supplemental Material
https://doi.org/10.23641/asha.5899228
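Acoustic vowel expansion is commonly quantified as the area of the polygon spanned by the corner vowels in F1 × F2 space (the shoelace formula); whether this exact metric was used here is not stated, so the sketch below, with invented formant values, is purely illustrative:

```python
def vowel_space_area(formants):
    """Polygon area (shoelace formula) of vowels in (F1, F2) space, in Hz^2.

    `formants`: (F1, F2) pairs listed in polygon order.
    """
    n = len(formants)
    twice_area = 0.0
    for i in range(n):
        f1a, f2a = formants[i]
        f1b, f2b = formants[(i + 1) % n]  # wrap around to close the polygon
        twice_area += f1a * f2b - f1b * f2a
    return abs(twice_area) / 2.0

# Invented corner-vowel formants (/i/, /ae/, /a/, /u/) for one speaker:
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
area = vowel_space_area(corners)  # larger area = more expanded vowel space
```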

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2GwCvtg
via IFTTT

Targeting Complex Sentences in Older School Children With Specific Language Impairment: Results From an Early-Phase Treatment Study

Purpose
This study investigated the effects of a complex sentence treatment at 2 dosage levels on language performance of 30 school-age children ages 10–14 years with specific language impairment.
Method
Three types of complex sentences (adverbial, object complement, relative) were taught in sequence in once or twice weekly dosage conditions. Outcome measures included sentence probes administered at baseline, treatment, and posttreatment phases and comparisons of pre–post performance on oral and written language tests and tasks. Relationships between pretest variables and treatment outcomes were also explored.
Results
Treatment was effective at improving performance on the sentence probes for the majority of participants; however, results differed by sentence type, with the largest effect sizes for adverbial and relative clauses. Significant and clinically meaningful pre–post treatment gains were found on a comprehensive oral language test, but not on reading and writing measures. There was no treatment advantage for the higher dosage group. Several significant correlations indicated a relationship between lower pretest scores and higher scores on outcome measures.
Conclusions
Results suggest that a focused intervention can produce improvements in complex sentence productions of older school children with language impairment. Future research should explore ways to maximize gains and extend impact to natural language contexts.
Supplemental Material
https://doi.org/10.23641/asha.5923318

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2Du5k6D
via IFTTT

Dysarthria in Mandarin-Speaking Children With Cerebral Palsy: Speech Subsystem Profiles

Purpose
This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences.
Method
Data from 6 children with CP and a clinical diagnosis of moderate dysarthria were compared with data from 9 TD children using a multiple speech subsystems approach. Acoustic and perceptual variables reflecting 3 speech subsystems (articulatory-phonetic, phonatory, and prosodic), and speech intelligibility, were measured based on speech samples obtained from the Test of Children's Speech Intelligibility in Mandarin (developed in the lab for the purpose of this research).
Results
The CP and TD children differed in several aspects of speech subsystem function. Speech intelligibility scores in children with CP were influenced by all 3 speech subsystems, but articulatory-phonetic variables had the highest correlation with word intelligibility. All 3 subsystems influenced sentence intelligibility.
Conclusion
Children with CP demonstrated deficits in speech intelligibility and articulation compared with TD children. Better speech sound articulation was associated with higher word intelligibility but did not confer a comparable benefit to sentence intelligibility.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FDZmpN
via IFTTT

Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users

Purpose
The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions.
Method
First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables.
Results
The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average.
Conclusions
We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.
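The role of age and pure-tone average as confounders of the TRT–SRT association can be illustrated by partial correlation via residualization. All variable names and data below are invented toy stand-ins; the authors' actual association models were more elaborate:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out confounder(s) z."""
    Zb = np.column_stack([np.ones(len(x)), z])  # intercept + confounders
    rx = x - Zb @ np.linalg.lstsq(Zb, x, rcond=None)[0]  # residualize x
    ry = y - Zb @ np.linalg.lstsq(Zb, y, rcond=None)[0]  # residualize y
    return np.corrcoef(rx, ry)[0, 1]

# Toy data in which age drives both scores (all values invented):
rng = np.random.default_rng(1)
age = rng.normal(65, 8, 200)
trt = 0.05 * age + rng.normal(0, 0.3, 200)  # poorer (higher) TRT with age
srt = 0.08 * age + rng.normal(0, 0.4, 200)  # poorer (higher) SRT with age

raw_r = np.corrcoef(trt, srt)[0, 1]   # inflated by the shared age effect
adj_r = partial_corr(trt, srt, age)   # near zero once age is adjusted for
```

In this toy setup the raw correlation is entirely an age artifact, mirroring how associations in the study lost significance after adjusting for age and pure-tone average.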

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FLRLBw
via IFTTT

Development of Velopharyngeal Closure for Vocalization During the First 2 Years of Life

Purpose
The vocalizations of young infants often sound nasalized, suggesting that the velopharynx is open during the 1st few months of life. Whereas acoustic and perceptual studies seemed to support the idea that the velopharynx closes for vocalization by about 4 months of age, an aeromechanical study contradicted this (Thom, Hoit, Hixon, & Smith, 2006). Thus, the current large-scale investigation was undertaken to determine when the velopharynx closes for speech production by following infants during their first 2 years of life.
Method
This longitudinal study used nasal ram pressure to determine the status of the velopharynx (open or closed) during spontaneous speech production in 92 participants (46 male, 46 female) studied monthly from age 4 to 24 months.
Results
The velopharynx was closed during at least 90% of the utterances by 19 months, though there was substantial variability across participants. When considered by sound category, the velopharynx was closed from most to least often during production of oral obstruents, approximants, vowels (only), and glottal obstruents. No sex effects were observed.
Conclusion
Velopharyngeal closure for spontaneous speech production can be considered complete by 19 months, but closure occurs earlier for speech sounds with higher oral pressure demands.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FHaZMA
via IFTTT

Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech

Purpose
Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to improve their accuracy and reproducibility.
Method
Basic information is drawn from standards, from the technical, voice, and speech literature, and from the authors' practical experience, and is explained for nontechnical readers.
Results
Topics reviewed include the variation of SPL with distance, sound level meters and their accuracy, frequency and time weightings, and background noise. Several calibration procedures for SPL measurements are described for stand-mounted and head-mounted microphones.
Conclusions
SPL of voice and speech should be reported together with the mouth-to-microphone distance so that the levels can be related to vocal power. Sound level measurement settings (i.e., frequency weighting and time weighting/averaging) should always be specified. Classified sound level meters should be used to assure measurement accuracy. Head-mounted microphones placed close to the mouth improve the signal-to-noise ratio and, when calibrated, can be used for voice SPL measurements. Background noise levels should be reported alongside the sound levels of voice and speech.
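The dependence of SPL on mouth-to-microphone distance follows, to a first approximation, free-field spherical spreading: level falls by about 6 dB per doubling of distance (real rooms with reflections deviate from this). A sketch of the conversion:

```python
import math

def spl_at_distance(spl_ref_db, d_ref_m, d_new_m):
    """Predict SPL at a new distance from a reference measurement,
    assuming free-field spherical spreading (the 20*log10 distance law)."""
    return spl_ref_db - 20.0 * math.log10(d_new_m / d_ref_m)

# A voice measured at 75 dB SPL with the microphone 0.30 m from the mouth:
at_60cm = spl_at_distance(75.0, 0.30, 0.60)  # doubling distance: about -6 dB
at_1m = spl_at_distance(75.0, 0.30, 1.00)
```

This is why the tutorial insists on reporting the measurement distance: the same voice yields very different SPL readings at 5 cm, 30 cm, and 1 m.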

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2DvKKTd
via IFTTT

Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

Purpose
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for such data, which usually consist of nonnegative, skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data.
Method
We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile–quantile plots.
Results
We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models as compared to Cox models. The linear models are not validated on our data, whereas the Cox models are. Moreover, in the second example, the Cox model detects a significant effect that the linear model does not.
Conclusions
We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
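The quantile–quantile comparison used above to judge goodness of fit can be illustrated with a stdlib sketch (not the authors' R procedure): score QQ-plot straightness as the correlation between sorted data and the theoretical normal quantiles, and compare raw versus log-transformed durations on simulated skewed data.

```python
import math
import random
from statistics import NormalDist

def qq_correlation(sample):
    """QQ-plot straightness score: correlation between sorted data and the
    normal quantiles expected if the data were Gaussian (near 1 = good fit)."""
    xs = sorted(sample)
    n = len(xs)
    qs = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    mean_x = sum(xs) / n
    mean_q = sum(qs) / n
    cov = sum((x - mean_x) * (q - mean_q) for x, q in zip(xs, qs))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sq = math.sqrt(sum((q - mean_q) ** 2 for q in qs))
    return cov / (sx * sq)

# Skewed, nonnegative "durations" (lognormal toy data):
random.seed(42)
durations = [random.lognormvariate(0.0, 0.8) for _ in range(500)]

r_raw = qq_correlation(durations)                         # Gaussian fits poorly
r_log = qq_correlation([math.log(d) for d in durations])  # log scale fits well
```

Here the log transform rescues the Gaussian model because the toy data are lognormal; the paper's point is that real duration data often defeat even the log-linear model, which is what motivates the semiparametric Cox approach.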

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2GvXCvR
via IFTTT

What Does a Cue Do? Comparing Phonological and Semantic Cues for Picture Naming in Aphasia

Purpose
Impaired naming is one of the most common symptoms in aphasia, often treated with cued picture naming paradigms. It has been argued that semantic cues facilitate the reliable categorization of the picture, and phonological cues facilitate the retrieval of target phonology. To test these hypotheses, we compared the effectiveness of phonological and semantic cues in picture naming for a group of individuals with aphasia. To establish the locus of effective cueing, we also tested whether cue type interacted with lexical and image properties of the targets.
Method
Individuals with aphasia (n = 10) were tested with a within-subject design. They named a large set of items (n = 175) 4 times. Each presentation of the items was accompanied by a different cueing condition (phonological, semantic, nonassociated word and tone). Item level variables for the targets (i.e., phoneme length, frequency, imageability, name agreement, and visual complexity) were used to test the interaction of cue type and item variables. Naming accuracy data were analyzed using generalized linear mixed effects models.
Results
Phonological cues were more effective than semantic cues, improving accuracy across individuals. However, phonological cues did not interact with phonological or lexical aspects of the picture names (e.g., phoneme length, frequency). Instead, they interacted with properties of the picture itself (i.e., visual complexity), such that phonological cues improved naming accuracy for items with low visual complexity.
Conclusions
The findings challenge the theoretical assumption that phonological cues map to phonological processes. Instead, phonological information benefits the earliest stages of picture recognition, aiding the initial categorization of the target. The data help to explain why patterns of cueing are not consistent in aphasia; that is, it is not the case that phonological impairments always benefit from phonological cues and semantic impairments from semantic cues. A substantial amount of the literature in naming therapy focuses on picture naming paradigms. Therefore, the results are also critically important for rehabilitation, allowing for therapy development to be more rooted in the true mechanisms through which cues are processed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FMDhRV
via IFTTT

Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

Purpose
Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with language impairment, speech delay, and typically developing peers.
Method
Speech perception was measured by discrimination of synthesized speech syllable continua that varied in frequency (/dɑ/–/ɡɑ/). Groups were classified by performance on speech and language assessments and compared on syllable discrimination thresholds. Within-group variability was also evaluated.
Results
Children with CAS without language impairment did not significantly differ in syllable discrimination compared to typically developing peers. In contrast, those with CAS and language impairment showed significantly poorer syllable discrimination abilities compared to children with CAS only and typically developing peers. Children with speech delay and language impairment also showed significantly poorer discrimination abilities, with appreciable within-group variability.
Conclusions
These findings suggest that speech perception deficits are not a core feature of CAS but rather are associated with co-occurring language impairment in a subset of children with CAS. This study establishes the significance of accounting for language ability in children with CAS.
Supplemental Materials
https://doi.org/10.23641/asha.5848056
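One common way to derive a discrimination threshold from per-step accuracy along a continuum is linear interpolation to a criterion level; the abstract does not state whether this study used interpolation or an adaptive procedure, so the sketch below, with invented accuracy values, is illustrative only:

```python
def discrimination_threshold(steps, pct_correct, criterion=70.7):
    """Estimate the continuum step at which accuracy crosses a criterion
    (e.g., 70.7% correct) by linear interpolation between tested steps."""
    pairs = list(zip(steps, pct_correct))
    for (s0, p0), (s1, p1) in zip(pairs, pairs[1:]):
        # Criterion lies between p0 and p1: interpolate within this segment.
        if (p0 - criterion) * (p1 - criterion) <= 0 and p1 != p0:
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    return None  # criterion never crossed

# Invented per-step accuracy along a /da/-/ga/ continuum (larger steps are
# acoustically more distinct, hence easier to discriminate):
steps = [1, 2, 3, 4, 5]
accuracy = [50.0, 55.0, 65.0, 80.0, 95.0]
threshold = discrimination_threshold(steps, accuracy)  # between steps 3 and 4
```

A lower threshold indicates finer discrimination, which is the direction of the group differences reported above.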

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FFIZJn
via IFTTT

Masked Repetition Priming Treatment for Anomia

Purpose
Masked priming has been suggested as a way to directly target implicit lexical retrieval processes in aphasia. This study was designed to investigate repeated use of masked repetition priming to improve picture naming in individuals with anomia due to aphasia.
Method
A single-subject, multiple-baseline design was used across 6 people with aphasia. Training involved repeated exposure to pictures that were paired with masked identity primes or sham primes. Two semantic categories were trained in series for each participant. Analyses assessed treatment effects, generalization within and across semantic categories, and effects on broader language skills, immediately and 3 months after treatment.
Results
Four of the 6 participants improved in naming trained items immediately after treatment. Improvements were generally greater for items trained with masked identity primes than for items trained with masked sham primes. Generalization within and across semantic categories was limited. Generalization to broader language skills was inconsistent.
Conclusion
Masked repetition priming may improve naming for some individuals with anomia due to aphasia. A number of methodological and theoretical insights into further development of this treatment approach are discussed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FJGmSt
via IFTTT

Speech Adaptation to Kinematic Recording Sensors: Perceptual and Acoustic Findings

Purpose
This study used perceptual and acoustic measures to examine the time course of speech adaptation after the attachment of electromagnetic sensor coils to the tongue, lips, and jaw.
Method
Twenty native English speakers read aloud stimulus sentences before the attachment of the sensors, immediately after attachment, and again 5, 10, 15, and 20 min later. They read aloud continuously between recordings to encourage adaptation. Sentence recordings were perceptually evaluated by 20 native English listeners, who rated 150 stimuli (which included 31 samples that were repeated to assess rater reliability) using a visual analog scale with the end points labeled as “precise” and “imprecise.” Acoustic analysis began by segmenting and measuring the duration of the fricatives /s/ and /ʃ/ as well as the whole sentence. The spectral center of gravity and spectral standard deviation of the 2 fricatives were measured using Praat. These phonetic targets were selected because the standard placement of sensor coils on the lingual surface was anticipated to interfere with normal fricative production, causing them to become distorted.
Results
Perceptual ratings revealed a decrease in speech precision after sensor attachment and evidence of adaptation over time; there was little perceptual change beyond the 10-min recording. The spectral center of gravity for /s/ decreased, and the spectral standard deviation for /ʃ/ increased after sensor attachment, but the acoustic measures showed no evidence of adaptation over time.
Conclusion
The findings suggest that 10 min may be sufficient time to allow speakers to adapt before experimental data collection with Northern Digital Instruments Wave electromagnetic sensors.
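The spectral center of gravity and spectral standard deviation measured above are the first two power-weighted moments of the spectrum (comparable in spirit to Praat's measures, whose weighting options this sketch ignores). A minimal numpy version, sanity-checked on a pure tone:

```python
import numpy as np

def spectral_moments(signal, fs):
    """Spectral center of gravity (power-weighted mean frequency, Hz) and
    spectral standard deviation (Hz), computed from the power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cog = np.sum(freqs * power) / np.sum(power)
    sd = np.sqrt(np.sum((freqs - cog) ** 2 * power) / np.sum(power))
    return cog, sd

# Sanity check: a pure 1000-Hz tone, 0.1 s at 16 kHz (exactly 100 cycles,
# so nearly all energy falls in a single FFT bin).
fs = 16000
t = np.arange(int(0.1 * fs)) / fs
cog, sd = spectral_moments(np.sin(2 * np.pi * 1000.0 * t), fs)
```

For real fricative tokens one would first window the segmented /s/ or /ʃ/ interval; a lowered /s/ center of gravity, as reported above, indicates energy shifted toward lower frequencies.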

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2GujIyF
via IFTTT

Results
Groups with SCD and DLD had significantly lower MP scores and less sophisticated explicitation than the group with typical language development. After controlling for language and age, the group with SCD had significantly lower SU scores than the group with DLD. Significant correlations were found between MP scores and age/language ability but not with pragmatic impairment.
Conclusions
Children with SCD or DLD performed poorly on an MP task compared with typically developing children but did not differ from each other in their ability to reflect verbally on pragmatic features in interactions. MP ability appears to be closely related to structural language ability. The limited ability of children with SCD to attribute social/psychological states to interlocutors may indicate additional social attribution limitations.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2FHrE2k
via IFTTT

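The two-predictor logistic model can be illustrated with a minimal hand-rolled sketch; the feature values below are invented for illustration only (they are not the study's data), and a real analysis would use a statistics package rather than this simple gradient-descent fit:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a logistic regression classifier by stochastic gradient
    descent. X is a list of feature vectors, y a list of 0/1 labels."""
    w, b, = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# Hypothetical values: [articulation rate in syllables/s, F2 range of
# diphthongs in kHz]; label 1 = dysarthria, 0 = typically developing.
X = [[2.1, 0.6], [2.4, 0.8], [2.0, 0.5], [2.6, 0.7],
     [3.8, 1.4], [4.1, 1.6], [3.6, 1.3], [4.0, 1.5]]
y = [1, 1, 1, 1, 0, 0, 0, 0]
w, b = fit_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

Classification accuracy on the model's own training data, as here, is optimistic; the 87.5% figure above would ordinarily be checked with cross-validation.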
Vocalization Subsystem Responses to a Temporarily Induced Unilateral Vocal Fold Paralysis

Purpose
The purpose of this study is to quantify the interactions of the 3 vocalization subsystems of respiration, phonation, and resonance before, during, and after a perturbation to the larynx (temporarily induced unilateral vocal fold paralysis) in 10 vocally healthy participants. Using dynamic systems theory as a guide, we hypothesized that data groupings would emerge revealing context-dependent patterns in the relationships of variables representing the 3 vocalization subsystems. We also hypothesized that group data would mask individual variability important to understanding the relationships among the vocalization subsystems.
Method
A perturbation paradigm was used to obtain respiratory kinematic, aerodynamic, and acoustic formant measures from 10 healthy participants (8 women, 2 men) with normal voices. Group and individual data were analyzed to provide a multilevel analysis of the data. A 3-dimensional state space model was constructed to demonstrate the interactive relationships among the 3 subsystems before, during, and after perturbation.
Results
During perturbation, group data revealed that lung volume initiations and terminations were lower, with longer respiratory excursions; airflow rates increased while subglottic pressures were maintained. Acoustic formant measures indicated that the spacing between the upper formants decreased (F3–F5), whereas the spacing between F1 and F2 increased. State space modeling revealed the changing directionality and interactions among the 3 subsystems.
Conclusions
Group data alone masked important variability necessary to understand the unique relationships among the 3 subsystems. Multilevel analysis permitted a richer understanding of the individual differences in phonatory regulation and permitted subgroup analysis. Dynamic systems theory may be a useful heuristic to model the interactive relationships among vocalization subsystems.
Supplemental Material
https://doi.org/10.23641/asha.5913532

from #Audiology via ola Kala on Inoreader http://ift.tt/2GxWtE2
via IFTTT

The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

Purpose
The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method
We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
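Mixing babble with speech at a fixed signal-to-noise ratio, as in the listening conditions above, amounts to scaling the noise so the RMS ratio hits the target. A minimal sketch with synthetic stand-in signals (the tones below are illustrative, not the study's stimuli):

```python
import math

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def noise_gain_for_snr(speech, noise, snr_db):
    """Gain applied to `noise` so that speech + gain*noise has the
    requested signal-to-noise ratio in dB."""
    return rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))

# Toy 1-s signals at 8 kHz standing in for a sentence and babble.
speech = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
noise = [0.3 * math.sin(2 * math.pi * 137 * t / 8000 + 1.0) for t in range(8000)]
gain = noise_gain_for_snr(speech, noise, 5.0)
mixed = [s + gain * n for s, n in zip(speech, noise)]
snr_check = 20 * math.log10(rms(speech) / (gain * rms(noise)))
```

The same function with `snr_db=15.0` yields the easier +15 dB condition; a smaller gain leaves the speech more intelligible.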
Results
Recognition memory (indexed by d′) was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
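The d′ index used above separates sensitivity from response bias: d′ = z(hit rate) − z(false-alarm rate). A minimal sketch, with a log-linear correction so perfect rates stay finite (the trial counts below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    (add 0.5) correction so rates of 0 or 1 remain finite."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical counts for one listener in an easy and a hard condition.
quiet = d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42)
noisy = d_prime(hits=30, misses=20, false_alarms=18, correct_rejections=32)
```

A drop in d′ from the quiet to the noisy condition, as in this toy example, is the pattern the Results describe for acoustically challenging sentences.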
Conclusions
Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.
Supplemental Materials
https://doi.org/10.23641/asha.5848059

from #Audiology via ola Kala on Inoreader http://ift.tt/2FLQryq
via IFTTT

Pitch and Time Processing in Speech and Tones: The Effects of Musical Training and Attention

Purpose
Musical training is often linked to enhanced auditory discrimination, but the relative roles of pitch and time in music and speech are unclear. Moreover, it is unclear whether pitch and time processing are correlated across individuals and how they may be affected by attention. This study aimed to examine pitch and time processing in speech and tone sequences, taking musical training and attention into account.
Method
Musicians (n = 16) and nonmusicians (n = 16) were asked to detect pitch or timing changes in speech and tone sequences and make a binary response. In some conditions, the participants were focused on 1 aspect of the stimulus (directed attention), and in others, they had to pay attention to all aspects at once (divided attention).
Results
As expected, musicians performed better overall. Performance scores on pitch and time tasks were correlated, as were performance scores for speech and tonal stimuli, but most markedly in musicians. All participants performed better on the directed versus divided attention task, but again, musicians performed better than nonmusicians.
Conclusion
In general, this experiment shows that individuals with a better sense of pitch discrimination also have a better sense of timing discrimination in the auditory domain. In addition, although musicians perform better overall, these results do not support the idea that musicians have an added advantage for divided attention tasks. These findings serve to better understand how musical training and attention affect pitch and time processing in the context of speech and tones and may have applications in special populations.
Supplemental Material
https://doi.org/10.23641/asha.5895997

from #Audiology via ola Kala on Inoreader http://ift.tt/2FCVqoV
via IFTTT

Implementation Research: Embracing Practitioners' Views

Purpose
This research explores practitioners' perspectives during the implementation of triadic gaze intervention (TGI), an evidence-based protocol for assessing and planning treatment targeting gaze as an early signal of intentional communication for young children with physical disabilities.
Method
Using qualitative methods, 7 practitioners from 1 early intervention center reported their perceptions about (a) early intervention for young children with physical disabilities, (b) acceptability and feasibility in the use of the TGI protocol in routine practice, and (c) feasibility of the TGI training. Qualitative data were gathered from 2 semistructured group interviews, once before and once after TGI training and implementation.
Results
Qualitative results documented the practitioners' reflections on recent changes to early intervention service delivery, the impact of such change on TGI adoption, and an overall strong enthusiasm for the TGI protocol, despite some need for adaptation.
Conclusion
These results are discussed relative to adapting the TGI protocol and training, when considering how to best bring about change in practice. More broadly, results highlighted the critical role of researcher–practitioner collaboration in implementation research and the value of qualitative data for gaining a richer understanding of practitioners' perspectives about the implementation process.

from #Audiology via ola Kala on Inoreader http://ift.tt/2DvvGVX
via IFTTT

Deep Brain Stimulation of the Subthalamic Nucleus Parameter Optimization for Vowel Acoustics and Speech Intelligibility in Parkinson's Disease

Purpose
The settings of 3 electrical stimulation parameters were adjusted in 12 speakers with Parkinson's disease (PD) with deep brain stimulation of the subthalamic nucleus (STN-DBS) to examine their effects on vowel acoustics and speech intelligibility.
Method
Participants were tested under permutations of low, mid, and high STN-DBS frequency, voltage, and pulse width settings. At each session, participants recited a sentence. Acoustic characteristics of vowel production were extracted, and naive listeners provided estimates of speech intelligibility.
Results
Overall, lower-frequency STN-DBS stimulation (60 Hz) was found to lead to improvements in intelligibility and acoustic vowel expansion. An interaction between speaker sex and STN-DBS stimulation was found for vowel measures. The combination of low frequency, mid to high voltage, and low to mid pulse width led to optimal speech outcomes; however, these settings did not demonstrate significant speech outcome differences compared with the standard clinical STN-DBS settings, likely due to substantial individual variability.
Conclusions
Although lower-frequency STN-DBS stimulation was found to yield consistent improvements in speech outcomes, it was not found to necessarily lead to the best speech outcomes for all participants. Nevertheless, frequency may serve as a starting point to explore settings that will optimize an individual's speech outcomes following STN-DBS surgery.
Supplemental Material
https://doi.org/10.23641/asha.5899228

from #Audiology via ola Kala on Inoreader http://ift.tt/2GwCvtg
via IFTTT

Targeting Complex Sentences in Older School Children With Specific Language Impairment: Results From an Early-Phase Treatment Study

Purpose
This study investigated the effects of a complex sentence treatment at 2 dosage levels on language performance of 30 school-age children ages 10–14 years with specific language impairment.
Method
Three types of complex sentences (adverbial, object complement, relative) were taught in sequence in once or twice weekly dosage conditions. Outcome measures included sentence probes administered at baseline, treatment, and posttreatment phases and comparisons of pre–post performance on oral and written language tests and tasks. Relationships between pretest variables and treatment outcomes were also explored.
Results
Treatment was effective at improving performance on the sentence probes for the majority of participants; however, results differed by sentence type, with the largest effect sizes for adverbial and relative clauses. Significant and clinically meaningful pre–post treatment gains were found on a comprehensive oral language test, but not on reading and writing measures. There was no treatment advantage for the higher dosage group. Several significant correlations indicated a relationship between lower pretest scores and higher outcome measures.
Conclusions
Results suggest that a focused intervention can produce improvements in complex sentence productions of older school children with language impairment. Future research should explore ways to maximize gains and extend impact to natural language contexts.
Supplemental Material
https://doi.org/10.23641/asha.5923318

from #Audiology via ola Kala on Inoreader http://ift.tt/2Du5k6D
via IFTTT

Dysarthria in Mandarin-Speaking Children With Cerebral Palsy: Speech Subsystem Profiles

Purpose
This study explored the speech characteristics of Mandarin-speaking children with cerebral palsy (CP) and typically developing (TD) children to determine (a) how children in the 2 groups may differ in their speech patterns and (b) the variables correlated with speech intelligibility for words and sentences.
Method
Data from 6 children with CP and a clinical diagnosis of moderate dysarthria were compared with data from 9 TD children using a multiple speech subsystems approach. Acoustic and perceptual variables reflecting 3 speech subsystems (articulatory-phonetic, phonatory, and prosodic), and speech intelligibility, were measured based on speech samples obtained from the Test of Children's Speech Intelligibility in Mandarin (developed in the lab for the purpose of this research).
Results
The CP and TD children differed in several aspects of speech subsystem function. Speech intelligibility scores in children with CP were influenced by all 3 speech subsystems, but articulatory-phonetic variables had the highest correlation with word intelligibility. All 3 subsystems influenced sentence intelligibility.
Conclusion
Children with CP demonstrated deficits in speech intelligibility and articulation compared with TD children. Better speech sound articulation contributed to higher word intelligibility but did not benefit sentence intelligibility.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FDZmpN
via IFTTT

Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users

Purpose
The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions.
Method
First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables.
Results
The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average.
Conclusions
We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FLRLBw
via IFTTT

Development of Velopharyngeal Closure for Vocalization During the First 2 Years of Life

Purpose
The vocalizations of young infants often sound nasalized, suggesting that the velopharynx is open during the 1st few months of life. Whereas acoustic and perceptual studies seemed to support the idea that the velopharynx closes for vocalization by about 4 months of age, an aeromechanical study contradicted this (Thom, Hoit, Hixon, & Smith, 2006). Thus, the current large-scale investigation was undertaken to determine when the velopharynx closes for speech production by following infants during their first 2 years of life.
Method
This longitudinal study used nasal ram pressure to determine the status of the velopharynx (open or closed) during spontaneous speech production in 92 participants (46 male, 46 female) studied monthly from age 4 to 24 months.
Results
The velopharynx was closed during at least 90% of the utterances by 19 months, though there was substantial variability across participants. When considered by sound category, the velopharynx was closed from most to least often during production of oral obstruents, approximants, vowels (only), and glottal obstruents. No sex effects were observed.
Conclusion
Velopharyngeal closure for spontaneous speech production can be considered complete by 19 months, but closure occurs earlier for speech sounds with higher oral pressure demands.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FHaZMA
via IFTTT

Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech

Purpose
Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to improve their accuracy and reproducibility.
Method
Basic information is put together from standards, technical, voice and speech literature, and practical experience of the authors and is explained for nontechnical readers.
Results
Variation of SPL with distance, sound level meters and their accuracy, frequency and time weightings, and background noise topics are reviewed. Several calibration procedures for SPL measurements are described for stand-mounted and head-mounted microphones.
Conclusions
SPL of voice and speech should be reported together with the mouth-to-microphone distance so that the levels can be related to vocal power. Sound level measurement settings (i.e., frequency weighting and time weighting/averaging) should always be specified. Classified sound level meters should be used to assure measurement accuracy. Head-mounted microphones placed at the proximity of the mouth improve signal-to-noise ratio and can be taken advantage of for voice SPL measurements when calibrated. Background noise levels should be reported besides the sound levels of voice and speech.
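The distance dependence underlying the recommendation above follows the inverse-square law: in free field, a point source loses 6 dB per doubling of distance. A minimal sketch of the conversion (free-field, point-source assumptions; the very near field of the mouth deviates from this):

```python
import math

def spl_at_distance(spl_ref, d_ref, d_new):
    """Convert an SPL measured at d_ref (m) to the level expected at
    d_new (m) for a point source in free field (6 dB per doubling)."""
    return spl_ref - 20.0 * math.log10(d_new / d_ref)

# A level of 70 dB SPL reported at 30 cm corresponds to roughly
# 59.5 dB SPL at the common 1-m reference distance.
level_1m = spl_at_distance(70.0, 0.30, 1.0)
```

This is why a reported level is uninterpretable without its mouth-to-microphone distance: the same vocal output yields very different numbers at 5 cm, 30 cm, and 1 m.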

from #Audiology via ola Kala on Inoreader http://ift.tt/2DvKKTd
via IFTTT

Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

Purpose
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data.
Method
We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile–quantile plots.
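The simulation-and-quantile comparison can be sketched in miniature; this is Python rather than the authors' R code, the durations below are simulated rather than real, and the candidate model is a simple lognormal stand-in for the fitted models being compared:

```python
import math
import random
import statistics

def qq_points(observed, simulate, n_sim=5000, n_q=9):
    """Deciles of the observed durations paired with deciles of
    durations simulated from a candidate model; points close to the
    identity line indicate good fit."""
    sim = [simulate() for _ in range(n_sim)]
    return list(zip(statistics.quantiles(observed, n=n_q + 1),
                    statistics.quantiles(sim, n=n_q + 1)))

random.seed(1)
# Hypothetical nonnegative, right-skewed durations (seconds).
observed = [random.lognormvariate(-0.5, 0.6) for _ in range(300)]

# Fit a lognormal candidate model and simulate from it.
logs = [math.log(d) for d in observed]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)
points = qq_points(observed, lambda: random.lognormvariate(mu, sigma))
max_gap = max(abs(o - s) for o, s in points)
```

A poorly fitting model (e.g., a normal distribution forced onto these skewed data) would show systematic departures from the identity line, which is how the procedure flags the linear model's inadequacy below.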
Results
We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not.
Conclusions
We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

from #Audiology via ola Kala on Inoreader http://ift.tt/2GvXCvR
via IFTTT

What Does a Cue Do? Comparing Phonological and Semantic Cues for Picture Naming in Aphasia

Purpose
Impaired naming is one of the most common symptoms in aphasia, often treated with cued picture naming paradigms. It has been argued that semantic cues facilitate the reliable categorization of the picture, and phonological cues facilitate the retrieval of target phonology. To test these hypotheses, we compared the effectiveness of phonological and semantic cues in picture naming for a group of individuals with aphasia. To establish the locus of effective cueing, we also tested whether cue type interacted with lexical and image properties of the targets.
Method
Individuals with aphasia (n = 10) were tested with a within-subject design. They named a large set of items (n = 175) 4 times. Each presentation of the items was accompanied by a different cueing condition (phonological, semantic, nonassociated word, and tone). Item-level variables for the targets (i.e., phoneme length, frequency, imageability, name agreement, and visual complexity) were used to test the interaction of cue type and item variables. Naming accuracy data were analyzed using generalized linear mixed effects models.
Results
Phonological cues were more effective than semantic cues, improving accuracy across individuals. However, phonological cues did not interact with phonological or lexical aspects of the picture names (e.g., phoneme length, frequency). Instead, they interacted with properties of the picture itself (i.e., visual complexity), such that phonological cues improved naming accuracy for items with low visual complexity.
Conclusions
The findings challenge the theoretical assumptions that phonological cues map to phonological processes. Instead, phonological information benefits the earliest stages of picture recognition, aiding the initial categorization of the target. The data help to explain why patterns of cueing are not consistent in aphasia; that is, it is not the case that phonological impairments always benefit from phonological cues and semantic impairments from semantic cues. A substantial amount of the literature in naming therapy focuses on picture naming paradigms. Therefore, the results are also critically important for rehabilitation, allowing for therapy development to be more rooted in the true mechanisms through which cues are processed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FMDhRV
via IFTTT

Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

Purpose
Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with language impairment, speech delay, and typically developing peers.
Method
Speech perception was measured by discrimination of synthesized speech syllable continua that varied in frequency (/dɑ/–/ɡɑ/). Groups were classified by performance on speech and language assessments and compared on syllable discrimination thresholds. Within-group variability was also evaluated.
Results
Children with CAS without language impairment did not significantly differ in syllable discrimination compared to typically developing peers. In contrast, those with CAS and language impairment showed significantly poorer syllable discrimination abilities compared to children with CAS only and typically developing peers. Children with speech delay and language impairment also showed significantly poorer discrimination abilities, with appreciable within-group variability.
Conclusions
These findings suggest that speech perception deficits are not a core feature of CAS but rather occur with co-occurring language impairment in a subset of children with CAS. This study establishes the significance of accounting for language ability in children with CAS.
Supplemental Materials
https://doi.org/10.23641/asha.5848056

from #Audiology via ola Kala on Inoreader http://ift.tt/2FFIZJn
via IFTTT

Masked Repetition Priming Treatment for Anomia

Purpose
Masked priming has been suggested as a way to directly target implicit lexical retrieval processes in aphasia. This study was designed to investigate repeated use of masked repetition priming to improve picture naming in individuals with anomia due to aphasia.
Method
A single-subject, multiple-baseline design was used across 6 people with aphasia. Training involved repeated exposure to pictures that were paired with masked identity primes or sham primes. Two semantic categories were trained in series for each participant. Analyses assessed treatment effects, generalization within and across semantic categories, and effects on broader language skills, immediately and 3 months after treatment.
Results
Four of the 6 participants improved in naming trained items immediately after treatment. Improvements were generally greater for items that were presented in training with masked identity primes than items that were presented repeatedly during training with masked sham primes. Generalization within and across semantic categories was limited. Generalization to broader language skills was inconsistent.
Conclusion
Masked repetition priming may improve naming for some individuals with anomia due to aphasia. A number of methodological and theoretical insights into further development of this treatment approach are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2FJGmSt
via IFTTT
