Thursday, October 19, 2017

October 24, 2017 – “The Great Give” Returns to SDSU!

Visit The Great Give website

Last year, “The Great Give” was a huge success, raising over $137,000 in a single 24-hour period of giving.

Consider contributing this year when “The Great Give” returns on October 24, 2017!

You may support the School of Speech, Language, and Hearing Sciences directly by using the donation link below:

 



from #Audiology via ola Kala on Inoreader http://ift.tt/2zmtaPO
via IFTTT

Consonant Age-of-Acquisition Effects in Nonword Repetition Are Not Articulatory in Nature

Purpose
Most research examining long-term-memory effects on nonword repetition (NWR) has focused on lexical-level variables. Phoneme-level variables have received little attention, although there are reasons to expect significant sublexical effects in NWR. To further understand the underlying processes of NWR, this study examined effects of sublexical long-term phonological knowledge by testing whether performance differs when the stimuli comprise consonants acquired later versus earlier in speech development.
Method
Thirty (Experiment 1) and 20 (Experiment 2) college students completed tasks that investigated whether an experimental phoneme-level variable (consonant age of acquisition) similarly affects NWR and lexical-access tasks designed to vary in articulatory, auditory-perceptual, and phonological short-term-memory demands. The lexical-access tasks were performed in silence or with concurrent articulation to explore whether consonant age-of-acquisition effects arise before or after articulatory planning.
Results
NWR accuracy decreased on items comprising later- versus earlier-acquired phonemes. Similar consonant age-of-acquisition effects were observed in accuracy measures of nonword reading and lexical decision performed in silence or with concurrent articulation.
Conclusion
Results indicate that NWR performance is sensitive to phoneme-level phonological knowledge in long-term memory. NWR, accordingly, should not be regarded as a diagnostic tool for pure impairment of phonological short-term memory.
Supplemental Materials
http://ift.tt/2hQu7Jj

from #Audiology via xlomafota13 on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0359/2659551/Consonant-AgeofAcquisition-Effects-in-Nonword
via IFTTT

Our October Issue Is Here



from #Audiology via ola Kala on Inoreader http://article/2657392/Our-October-Issue-Is-Here
via IFTTT

Preliminary Evidence That Growth in Productive Language Differentiates Childhood Stuttering Persistence and Recovery

Purpose
Childhood stuttering is common but is often outgrown. Children whose stuttering persists experience significant life impacts, calling for a better understanding of what factors may underlie eventual recovery. In previous research, language ability has been shown to differentiate children who stutter (CWS) from children who do not stutter, yet there is an active debate in the field regarding what, if any, language measures may mark eventual recovery versus persistence. In this study, we examined whether growth in productive language performance may better predict the probability of recovery compared to static profiles taken from a single time point.
Method
Productive syntax and vocabulary diversity growth rates were calculated for 50 CWS using random coefficient models. Logistic regression models were then used to determine whether growth rates uniquely predict likelihood of recovery, as well as if these rates were predictive over and above currently identified correlates of stuttering onset and recovery.
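The logistic-regression step described above can be sketched in a few lines of Python. Everything below is simulated for illustration only — the sample size, growth rates, and coefficient values are invented, not the study's data:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated stand-in data: one growth rate per child and a recovery outcome.
rng = np.random.default_rng(0)
n = 200
growth = rng.normal(size=n)                 # productive-syntax growth rate
true_logit = -0.2 + 1.5 * growth            # assumed (invented) relationship
recovered = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), growth])   # intercept + growth predictor

def neg_log_likelihood(beta):
    z = X @ beta
    # log(1 + e^z) - y*z is the negative Bernoulli log-likelihood per case
    return np.sum(np.logaddexp(0.0, z) - recovered * z)

beta_hat = minimize(neg_log_likelihood, np.zeros(2)).x
odds_ratio = np.exp(beta_hat[1])            # effect of growth on recovery odds
```

A positive fitted slope (odds ratio above 1) corresponds to the study's finding that steeper syntactic growth predicts a higher probability of recovery.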
Results
Different linguistic profiles emerged between children who went on to recover versus those who persisted. Children who had steeper productive syntactic growth, but not vocabulary diversity growth, were more likely to recover by study end. Moreover, this effect held after controlling for initial language ability at study onset as well as demographic covariates.
Conclusions
Results are discussed in terms of how growth estimates can be incorporated in recommendations for fostering productive language skills among CWS. The need for additional research on language in early stuttering and recovery is suggested.

from #Audiology via xlomafota13 on Inoreader http://article/doi/10.1044/2017_JSLHR-S-16-0371/2657677/Preliminary-Evidence-That-Growth-in-Productive
via IFTTT

Internet-Based Self-Help for Ménière's Disease: Details and Outcome of a Single-Group Open Trial

Purpose
In this article, we present the details and the pilot outcome of an Internet-based self-help program for Ménière's disease (MD).
Method
The Norton–Kaplan model is applied to construct a strategic, person-focused approach in the enablement process. The program assesses the disorder profile and diagnosis. In the therapeutic component of the program, the participant defines a vision and time frame, inspects confounding factors, determines goals, establishes a strategy, and starts to work on the important problems caused by the disorder. The program works interactively, utilizes collaboration with significant others, and enhances positive thinking. Data were collected interactively using open-ended and structured questionnaires on various disease-specific and general health aspects. The pilot outcome of 41 patients with MD was evaluated.
Results
The analysis of the pilot data showed statistically significant improvement in participants' general health-related quality of life (p < .001). The Posttraumatic Growth Inventory (Cann et al., 2010) also showed small to moderate change as a result of the intervention.
Conclusions
The Internet-based self-help program can be helpful in the rehabilitation of patients with MD to supplement medical therapy.

from #Audiology via xlomafota13 on Inoreader http://article/doi/10.1044/2017_AJA-16-0068/2657617/InternetBased-SelfHelp-for-M%C3%A9ni%C3%A8res-Disease
via IFTTT

Working Memory and Speech Comprehension in Older Adults With Hearing Impairment

Purpose
This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB).
Method
Twenty-four older (59–73 years) adults with sensorineural HI participated. WM capacity (WMC) was measured using 3 complex span tasks. Speech comprehension was assessed using multiple passages, and speech identification ability was measured using recall of sentence-final words and key words. Speech measures were performed in quiet and in the presence of MTB at a +5 dB signal-to-noise ratio.
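Mixing speech and babble at a fixed signal-to-noise ratio, as in the +5 dB condition here, can be sketched with NumPy. The white-noise "speech" and "babble" signals below are placeholders for real recordings:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture and the scaled noise."""
    noise = np.resize(noise, speech.shape)          # truncate/tile to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled, scaled

rng = np.random.default_rng(0)
speech = rng.normal(size=16000)   # stand-in for a speech waveform
babble = rng.normal(size=16000)   # stand-in for multitalker babble
mix, scaled = mix_at_snr(speech, babble, 5.0)
```

Verifying the result, 10·log10(P_speech / P_noise) of the mixture's components comes out to exactly 5 dB.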
Results
Results suggested that participants' speech identification was poorer in MTB, but their ability to comprehend discourse in MTB was at least as good as in quiet. WMC did not explain significant variance in speech comprehension either before or after controlling for age and audibility. However, WMC explained significant variance in identification of key words in low-context sentences in MTB.
Conclusions
These results suggest that WMC plays an important role in identifying low-context sentences in MTB, but not when comprehending semantically rich discourse passages. In general, data did not support individual variability in WMC as a factor that predicts speech comprehension ability in older adults with HI.

from #Audiology via ola Kala on Inoreader http://article/60/10/2949/2657619/Working-Memory-and-Speech-Comprehension-in-Older
via IFTTT

Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

Purpose
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described, and its conceptual design, current implementation, and results obtained to date are reviewed and discussed.
Method
This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources.
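The acoustic beamforming at the heart of the VGHA can be illustrated with its simplest variant, a delay-and-sum beamformer. The array geometry, integer-sample delays, and white-noise sources below are invented for this sketch; the actual device is considerably more sophisticated:

```python
import numpy as np

def delay_and_sum(channels, steering_delays):
    """Undo each microphone's steering delay (integer samples, circular
    shift for simplicity) and average: the steered source adds coherently,
    while off-axis sources add incoherently and are attenuated."""
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, steering_delays)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
n = 16000
target = rng.normal(size=n)       # source the listener's gaze selects
masker = rng.normal(size=n)       # competing talker from another direction
target_delays = [0, 2, 4]         # arrival delays at a 3-mic array (samples)
masker_delays = [0, -3, -6]

mics = [np.roll(target, dt) + np.roll(masker, dm)
        for dt, dm in zip(target_delays, masker_delays)]

out = delay_and_sum(mics, target_delays)   # steered toward the target
```

Steering with the target's delays recovers the target exactly in this idealized setup while the masker's power is reduced by roughly the number of microphones, which is the signal-to-noise-ratio benefit the abstract describes for conditions high in energetic masking.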
Results
The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations.
Conclusions
Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation.
Presentation Video
http://ift.tt/2yzJXA3

from #Audiology via ola Kala on Inoreader http://article/60/10/3027/2659422/Enhancing-Auditory-Selective-Attention-Using-a
via IFTTT

Investigating the Role of Salivary Cortisol on Vocal Symptoms

Purpose
We investigated whether participants who reported more frequently occurring vocal symptoms showed higher salivary cortisol levels, and whether any such associations differed between men and women.
Method
The participants (N = 170; men n = 49, women n = 121) consisted of a population-based sample of Finnish twins born between 1961 and 1989. The participants submitted saliva samples for hormone analysis and completed a web questionnaire including questions regarding the occurrence of 6 vocal symptoms during the past 12 months. The data were analyzed using the generalized estimated equations method.
Results
A composite variable of the vocal symptoms showed a significant positive association with salivary cortisol levels (p < .001). Three of the 6 vocal symptoms were significantly associated with the level of cortisol when analyzed separately (p values less than .05). The results showed no gender difference regarding the effect of salivary cortisol on vocal symptoms.
Conclusions
There was a positive association between the occurrence of vocal symptoms and salivary cortisol levels: participants with higher cortisol levels reported more frequently occurring vocal symptoms. This may reflect the influence of stress, because stress is a known risk factor for vocal symptoms and salivary cortisol can serve as a biomarker for stress.

from #Audiology via ola Kala on Inoreader http://article/60/10/2781/2654587/Investigating-the-Role-of-Salivary-Cortisol-on
via IFTTT

Auditory Scene Analysis: An Attention Perspective

Purpose
This review article provides a new perspective on the role of attention in auditory scene analysis.
Method
A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity.
Conclusions
A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources.
Presentation Video
http://ift.tt/2x8vHwE

from #Audiology via ola Kala on Inoreader http://article/60/10/2989/2659418/Auditory-Scene-Analysis-An-Attention-Perspective
via IFTTT

The Influence of Executive Functions on Phonemic Processing in Children Who Do and Do Not Stutter

Purpose
The aim of the present study was to investigate dual-task performance in children who stutter (CWS) and children who do not, to determine whether the groups differed in the ability to attend and to allocate cognitive resources effectively during task performance.
Method
Participants were 24 children (12 CWS and 12 children who do not stutter), with the groups matched for age and sex. For the primary task, participants performed phoneme monitoring in a picture–written word interference task. For the secondary task, participants made pitch judgments on tones presented at varying (short, long) stimulus onset asynchronies (SOAs) from the onset of the picture.
Results
The CWS were comparable to the children who do not stutter in performing the monitoring task although the SOA-based performance differences in this task were more variable in the CWS. The CWS were also significantly slower in making tone decisions at the short SOA and showed a trend for making more errors in this task.
Conclusions
The findings are interpreted to suggest higher dual-task cost effects in CWS. A potential explanation for this finding requiring further testing and confirmation is that the CWS show reduced efficiency in attending to the tone stimuli while simultaneously prioritizing attention to the phoneme-monitoring task.

from #Audiology via ola Kala on Inoreader http://article/60/10/2792/2654663/The-Influence-of-Executive-Functions-on-Phonemic
via IFTTT

Error Type and Lexical Frequency Effects: Error Detection in Swedish Children With Language Impairment

Purpose
The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies.
Method
Error sensitivity for grammatical structures vulnerable in Swedish-speaking preschool children with LI (omission of the indefinite article in a noun phrase with a neuter/common noun, and use of the infinitive instead of past-tense regular and irregular verbs) was compared to a control error (singular noun instead of plural). Target structures involved a high-frequency (HF) or a low-frequency (LF) noun/verb. Grammatical and ungrammatical sentences were presented through headphones, and responses were collected through button presses.
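Button-press responses to grammatical versus ungrammatical sentences are naturally scored with signal-detection sensitivity (d′). A minimal sketch follows; the log-linear correction used here is an assumed choice, since the abstract does not specify the scoring formula:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 / +1) keeps rates away from 0 and 1,
    where the z-transform would be undefined."""
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(fa)
```

For example, a child who flags 9 of 10 ungrammatical sentences while false-alarming on only 1 of 10 grammatical ones gets a higher d′ than one who flags 6 of 10 with 4 false alarms, and chance performance yields d′ = 0.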
Results
Children with LI had similar sensitivity to the plural control error as peers with typical language development, but lower sensitivity to past-tense errors and noun phrase errors. All children showed lexical frequency effects for errors involving verbs (HF > LF), and noun gender effects for noun phrase errors (common > neuter).
Conclusions
School-age children with LI may have subtle difficulties with morphosyntactic processing that mirror expressive difficulties in preschool children with LI. Lexical frequency may affect morphosyntactic processing, which has clinical implications for assessment of grammatical knowledge.

from #Audiology via ola Kala on Inoreader http://article/60/10/2924/2654583/Error-Type-and-Lexical-Frequency-Effects-Error
via IFTTT

Architecture of the Suprahyoid Muscles: A Volumetric Musculoaponeurotic Analysis

Purpose
Suprahyoid muscles play a critical role in swallowing. The arrangement of the fiber bundles and aponeuroses has not been investigated volumetrically, even though muscle architecture is an important determinant of function. Thus, the purpose was to digitize, model in three dimensions, and quantify the architectural parameters of the suprahyoid muscles to determine and compare their relative functional capabilities.
Method
Fiber bundles and aponeuroses from 11 formalin-embalmed specimens were serially dissected and digitized in situ. Data were reconstructed in three dimensions using Autodesk Maya. Architectural parameters were quantified, and data were compared using independent-samples t tests and analyses of variance.
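The independent-samples comparison can be sketched with SciPy. The group means, spread, and units below are invented placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical fiber-bundle lengths (mm) for two of the muscle groups,
# one sample per specimen (n = 11 specimens); values are illustrative only.
rng = np.random.default_rng(42)
anteromedial = rng.normal(loc=25.0, scale=3.0, size=11)
superoposterior = rng.normal(loc=32.0, scale=3.0, size=11)

t_stat, p_value = stats.ttest_ind(anteromedial, superoposterior)
```

With a between-group difference this large relative to the spread, the test reports a significant difference, mirroring the kind of architectural contrast the study found across muscle groups.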
Results
Based on architecture and attachment sites, suprahyoid muscles were divided into 3 groups: anteromedial, superolateral, and superoposterior. Architectural parameters differed significantly (p < .05) across muscles and across the 3 groups, suggesting differential roles in hyoid movement during swallowing. When activated simultaneously, anteromedial and superoposterior muscle groups could work together to elevate the hyoid.
Conclusions
The results suggest that the suprahyoid muscles can have individualized roles in hyoid excursion during swallowing. Muscle balance may be important for identifying and treating hyolaryngeal dysfunction in patients with dysphagia.

from #Audiology via ola Kala on Inoreader http://article/60/10/2808/2655032/Architecture-of-the-Suprahyoid-Muscles-A
via IFTTT

Introduction to the Research Symposium Forum

Purpose
The purpose of this introduction is to provide an overview of the articles contained within this research forum of JSLHR. Each of these articles is based upon presentations from the 2016 ASHA Research Symposium.

from #Audiology via ola Kala on Inoreader http://article/60/10/2974/2659416/Introduction-to-the-Research-Symposium-Forum
via IFTTT

Ordinary Interactions Challenge Proposals That Maternal Verbal Responses Shape Infant Vocal Development

Purpose
This study tested proposals that maternal verbal responses shape infant vocal development, proposals based in part on evidence that infants modified their vocalizations to match mothers' experimentally manipulated vowel or consonant–vowel responses to most (i.e., 70%–80%) infant vocalizations. We tested the proposal in ordinary rather than experimentally manipulated interactions.
Method
Response-based proposals were tested in a cross-sectional study of 35 infants, ages 4 to 14 months, engaged in everyday interactions in their homes with their mothers using a standard set of toys and picture books.
Results
Mothers responded to 30% of infant vocalizations with vocal behaviors of their own, far fewer than experimentally manipulated response rates. Moreover, mothers produced comparatively few vowel and consonant–vowel models and responded to infants' vowel and consonant–vowel vocalizations in similar numbers. Infants showed little evidence of systematically modifying their vocal forms to match maternal responses in these interactions. Instead, consonant–vowel vocalizations increased significantly with infant age.
Conclusions
Results obtained in ordinary interactions, rather than response manipulation, did not provide substantial support for response-based mechanisms of infant vocal development. Consistent with other research, however, consonant–vowel productions increased with infant age.

Age-Related Changes in Objective and Subjective Speech Perception in Complex Listening Environments

Purpose
A frequent complaint by older adults is difficulty communicating in challenging acoustic environments. The purpose of this work was to review and summarize information about how speech perception in complex listening situations changes across the adult age range.
Method
This article provides a review of age-related changes in speech understanding in complex listening environments and summarizes results from several studies conducted in our laboratory.
Results
Both degree of high-frequency hearing loss and cognitive test performance limit individuals' ability to understand speech in difficult listening situations as they age. The performance of middle-aged adults is similar to that of younger adults in the presence of noise maskers, but middle-aged adults experience substantially more difficulty when the masker is 1 or 2 competing speech messages. For the most part, middle-aged participants in studies conducted in our laboratory reported as many self-perceived hearing problems as did older adult participants.
Conclusions
Research supports the multifactorial nature of listening in real-world environments. Current audiologic assessment practices are often insufficient to identify the true speech understanding struggles that individuals experience in these situations. This points to the importance of giving weight to patients' self-reported difficulties.
Presentation Video
http://ift.tt/2yzJJZJ

The History of Stuttering by 7 Years of Age: Follow-Up of a Prospective Community Cohort

Purpose
For a community cohort of children confirmed to have stuttered by the age of 4 years, we report (a) the recovery rate from stuttering, (b) predictors of recovery, and (c) comorbidities at the age of 7 years.
Method
This study was nested in the Early Language in Victoria Study. Predictors of stuttering recovery included child, family, and environmental measures and first-degree relative history of stuttering. Comorbidities examined at 7 years included temperament, language, nonverbal cognition, and health-related quality of life.
Results
The recovery rate by the age of 7 years was 65%. Girls with stronger communication skills at the age of 2 years had higher odds of recovery (adjusted OR = 7.1, 95% CI [1.3, 37.9], p = .02), but similar effects were not evident for boys (adjusted OR = 0.5, 95% CI [0.3, 1.1], p = .10). At the age of 7 years, children who had recovered from stuttering were more likely to have stronger language skills than children whose stuttering persisted (p = .05). No evident differences were identified on other outcomes including nonverbal cognition, temperament, and parent-reported quality of life.
Conclusion
Overall, findings suggested that there may be associations between language ability and recovery from stuttering. Subsequent research is needed to explore the directionality of this relationship.
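The adjusted odds ratios above come from the authors' own regression models. As a rough illustration of the arithmetic behind such figures, a crude (unadjusted) odds ratio with a Wald-type 95% confidence interval can be computed from a 2 × 2 recovery-by-predictor table; the function name and counts below are hypothetical and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI.

    a = recovered, predictor present;  b = persisted, predictor present
    c = recovered, predictor absent;   d = persisted, predictor absent
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 5, 4, 8)  # OR = (10*8)/(5*4) = 4.0
```

A CI that excludes 1.0 corresponds to the kind of significant adjusted OR reported for girls above; the study's estimates additionally adjust for covariates, which this sketch does not.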

Effect of Linguistic and Musical Experience on Distributional Learning of Nonnative Lexical Tones

Purpose
Evidence suggests that extensive experience with lexical tones or musical training provides an advantage in perceiving nonnative lexical tones. This investigation concerns whether such an advantage is evident in learning nonnative lexical tones based on the distributional structure of the input.
Method
Using an established protocol, distributional learning of lexical tones was investigated with tone language (Mandarin) listeners with no musical training (Experiment 1) and nontone language (Australian English) listeners with musical training (Experiment 2). Within each experiment, participants were trained on a bimodal (2-peak) or a unimodal (single peak) distribution along a continuum spanning a Thai lexical tone minimal pair. Discrimination performance on the target minimal pair was assessed before and after training.
Results
Mandarin nonmusicians exhibited clear distributional learning (listeners in the bimodal, but not those in the unimodal condition, improved significantly as a function of training), whereas Australian English musicians did not (listeners in both the bimodal and unimodal conditions improved as a function of training).
Conclusions
Our findings suggest that veridical perception of lexical tones is not sufficient for distributional learning of nonnative lexical tones to occur. Rather, distributional learning appears to be modulated by domain-specific pitch experience and is constrained possibly by top-down interference.

The Effect of Stimulus Variability on Learning and Generalization of Reading in a Novel Script

Purpose
The benefit of stimulus variability for generalization of acquired skills and knowledge has been shown in motor, perceptual, and language learning but has rarely been studied in reading. We studied the effect of variable training in a novel language on reading trained and untrained words.
Method
Sixty typical adults received 2 sessions of training in reading an artificial script. Participants were assigned to 1 of 3 groups: a variable training group practicing a large set of 24 words, and 2 nonvariable training groups practicing a smaller set of 12 words, with twice the number of repetitions per word.
Results
Variable training resulted in higher accuracy for both trained and untrained items composed of the same graphemes, compared to the nonvariable training. Moreover, performance on untrained items was correlated with phonemic awareness only for the nonvariable training groups.
Conclusions
High stimulus variability increases the reliance on small unit decoding in adults reading in a novel script, which is beneficial for both familiar and novel words. These results show that the statistical properties of the input during reading acquisition influence the type of acquired knowledge and have theoretical and practical implications for planning efficient reading instruction methods.
Supplemental Material
http://ift.tt/2h7vgMh

A Systematic Review and Meta-Analysis of Predictors of Expressive-Language Outcomes Among Late Talkers

Purpose
The purpose of this study was to explore the literature on predictors of outcomes among late talkers using systematic review and meta-analysis methods. We sought to answer the question: What factors predict preschool-age expressive-language outcomes among late-talking toddlers?
Method
We entered carefully selected search terms into the following electronic databases: Communication & Mass Media Complete, ERIC, Medline, PsycEXTRA, Psychological and Behavioral Sciences, and PsycINFO. We conducted a separate, random-effects model meta-analysis for each individual predictor that was used in a minimum of 5 studies. We also tested potential moderators of the relationship between predictors and outcomes using metaregression and subgroup analysis. Last, we conducted publication-bias and sensitivity analyses.
Results
We identified 20 samples, comprising 2,134 children, in a systematic review. According to the results of the meta-analyses, significant predictors of expressive-language outcomes included toddlerhood expressive-vocabulary size, receptive language, and socioeconomic status. Nonsignificant predictors included phrase speech, gender, and family history.
Conclusions
To our knowledge this is the first synthesis of the literature on predictors of outcomes among late talkers using meta-analysis. Our findings clarify the contributions of several constructs to outcomes and highlight the importance of early receptive language to expressive-language development.
Supplemental Materials
http://ift.tt/2yeQuj2
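A random-effects model of the kind used above pools per-study effect sizes while allowing for between-study heterogeneity. A minimal sketch of the classic DerSimonian–Laird estimator follows; it assumes each study contributes an effect size and its sampling variance, and it is not the authors' implementation:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird method).

    effects   : per-study effect sizes
    variances : their within-study sampling variances
    Returns (pooled effect, its standard error, tau^2).
    """
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures heterogeneity around the fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, floored at 0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2
```

When the studies agree perfectly, Q falls below its degrees of freedom, tau^2 is floored at zero, and the model reduces to the fixed-effect estimate.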

Language Sample Analysis and Elicitation Technique Effects in Bilingual Children With and Without Language Impairment

Purpose
This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination index [SI]), which are commonly used indices for diagnosing primary language impairment in Spanish–English-speaking children in the United States.
Method
Twenty bilingual Spanish–English-speaking children with typical language development and 20 with primary language impairment participated in the study. Four analyses of variance were conducted to evaluate the effect of language elicitation technique and group on D, GE/CU, MLUw, and SI. Also, 2 discriminant analyses were conducted to assess which indices were more effective for story retelling and storytelling and their classification accuracy across elicitation techniques.
Results
D, MLUw, and SI were influenced by the type of elicitation technique, but GE/CU was not. The classification accuracy of language sample analysis was greater in story retelling than in storytelling, with GE/CU and D being useful indicators of language abilities in story retelling and GE/CU and SI in storytelling.
Conclusion
Two indices in language sample analysis may be sufficient for diagnosis in 4- to 5-year-old bilingual Spanish–English-speaking children.

Pressurized Wideband Absorbance Findings in Healthy Neonates: A Preliminary Study

Purpose
The present study aimed to establish normative data for wideband absorbance (WBA) measured at tympanometric peak pressure (TPP) and 0 daPa and to assess the test–retest reliability of both measurements in healthy neonates.
Method
Participants of this cross-sectional study included 99 full-term neonates (165 ears) with a mean chronological age of 46.7 hr (SD = 26.3 hr). Of the 99 neonates, 58 were Malay, 28 were Indian, and 13 were Chinese. The neonates who passed high-frequency (1 kHz) tympanometry, acoustic stapedial reflex, and distortion product otoacoustic emission screening tests were assessed using a pressurized WBA test (wideband tympanometry). To reduce the number of measurement points, the WBA responses were averaged into 16 one-third octave frequency bands from 0.25 to 8 kHz. A mixed-model analysis of variance was applied to the data to investigate the effects of frequency, ear, gender, and ethnicity on WBA. The analysis of variance was also used to compare WBA measured at TPP with that measured at 0 daPa. An intraclass correlation coefficient test was applied at each of the 16 frequency bands to measure the test–retest reliability of WBA at TPP and 0 daPa.
Results
Both WBA measurements at TPP and 0 daPa exhibited a multipeaked pattern with 2 maxima at 1.25–1.6 kHz and 6.3 kHz and 2 minima at 0.5 and 4 kHz. The mean WBA measured at TPP was significantly higher than that measured at 0 daPa at 0.25, 0.4, 0.5, 1.25, and 1.6 kHz only. A normative data set was developed for absorbance at TPP and at 0 daPa. There was no significant effect of ethnicity, gender, or ear on either measurement of WBA. The test–retest reliability of WBA at TPP and 0 daPa was high, with the intraclass correlation coefficient ranging from 0.77 to 0.97 across the frequencies.
Conclusions
Normative data of WBA measured at TPP and 0 daPa for neonates were provided in the present study. Although WBA at TPP was slightly higher than the WBA measured at 0 daPa at some frequencies below 2 kHz, the WBA patterns of the 2 measurements were nearly identical. Moreover, the test–retest reliability of both WBA measurements was high.
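The 16 one-third-octave bands from 0.25 to 8 kHz mentioned in the Method span exactly five octaves, so a base-2 spacing reproduces that frequency grid. A minimal sketch, assuming base-2 band spacing (the exact band definitions used by the authors are not stated in the abstract):

```python
def third_octave_centers(f_start=250.0, n_bands=16):
    """Nominal one-third-octave band centre frequencies (base-2 spacing).

    With the defaults this yields 16 centres from 250 Hz to 8 kHz,
    the grid to which the WBA responses were averaged.
    """
    return [f_start * 2 ** (i / 3) for i in range(n_bands)]

centers = third_octave_centers()  # 250.0, 315.0..., up to 8000.0 Hz
```

Each band covers f_c * 2**(-1/6) to f_c * 2**(1/6), so averaging a finely sampled absorbance curve within those edges gives one value per band.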

Children's Comprehension of Object Relative Sentences: It's Extant Language Knowledge That Matters, Not Domain-General Working Memory

Purpose
The aim of this study was to determine whether extant language (lexical) knowledge or domain-general working memory is the better predictor of comprehension of object relative sentences for children with typical development. We hypothesized that extant language knowledge, not domain-general working memory, is the better predictor.
Method
Fifty-three children (ages 9–11 years) completed a word-level verbal working-memory task, indexing extant language (lexical) knowledge; an analog nonverbal working-memory task, representing domain-general working memory; and a hybrid sentence comprehension task incorporating elements of both agent selection and cross-modal picture-priming paradigms. Images of the agent and patient were displayed at the syntactic gap in the object relative sentences, and the children were asked to select the agent of the sentence.
Results
Results of general linear modeling revealed that extant language knowledge accounted for a unique 21.3% of variance in the children's object relative sentence comprehension over and above age (8.3%). Domain-general working memory accounted for a nonsignificant 1.6% of variance.
Conclusions
We interpret the results to suggest that extant language knowledge and not domain-general working memory is a critically important contributor to children's object relative sentence comprehension. Results support a connectionist view of the association between working memory and object relative sentence comprehension.
Supplemental Materials
http://ift.tt/2y59ZcY
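The "unique variance over and above age" figures above come from hierarchical entry of predictors into a general linear model: each step's R² is compared with the previous step's. A minimal sketch with simulated stand-in data (variable names and effect sizes are illustrative, not the study's):

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an ordinary least-squares fit of y on the columns of X
    (an intercept column is added internally)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

# Simulated measures standing in for the study's variables (not real data)
rng = np.random.default_rng(0)
age = rng.normal(size=100)
verbal_wm = rng.normal(size=100)   # stand-in for extant language knowledge
y = 0.3 * age + 0.5 * verbal_wm + rng.normal(size=100)

r2_step1 = r_squared(y, np.column_stack([age]))             # age alone
r2_step2 = r_squared(y, np.column_stack([age, verbal_wm]))  # + verbal WM
delta_r2 = r2_step2 - r2_step1  # unique variance attributed to verbal WM
```

Because the models are nested, adding a predictor can never lower R², so delta_r2 is always nonnegative; its significance is then tested with an F change statistic, which this sketch omits.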

Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

Purpose
This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings.
Method
The results from neuroscience and psychoacoustics are reviewed.
Results
In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.”
Conclusions
How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise.
Presentation Video
http://ift.tt/2yzJvBR

Home and Community Language Proficiency in Spanish–English Early Bilingual University Students

Purpose
This study assessed home and community language proficiency in Spanish–English bilingual university students to investigate whether the vocabulary gap reported in studies of bilingual children persists into adulthood.
Method
Sixty-five early bilinguals (mean age = 21 years) were assessed in English and Spanish vocabulary and verbal reasoning ability using subtests of the Woodcock-Muñoz Language Survey–Revised (Schrank & Woodcock, 2009). Their English scores were compared to 74 monolinguals matched in age and level of education. Participants also completed a background questionnaire.
Results
Bilinguals scored below the monolingual control group on both subtests, and the difference was larger for vocabulary compared to verbal reasoning. However, bilinguals were close to the population mean for verbal reasoning. Spanish scores were on average lower than English scores, but participants differed widely in their degree of balance. Participants with an earlier age of acquisition of English and more current exposure to English tended to be more dominant in English.
Conclusions
Vocabulary tests in the home or community language may underestimate bilingual university students' true verbal ability and should be interpreted with caution in high-stakes situations. Verbal reasoning ability may be more indicative of a bilingual's verbal ability.

Speech Perception in Complex Acoustic Environments: Developmental Effects

Purpose
The ability to hear and understand speech in complex acoustic environments follows a prolonged time course of development. The purpose of this article is to provide a general overview of the literature describing age effects in susceptibility to auditory masking in the context of speech recognition, including a summary of findings related to the maturation of processes thought to facilitate segregation of target from competing speech.
Method
Data from published and ongoing studies are discussed, with a focus on synthesizing results from studies that address age-related changes in the ability to perceive speech in the presence of a small number of competing talkers.
Conclusions
This review provides a summary of the current state of knowledge that is valuable for researchers and clinicians. It highlights the importance of considering listener factors, such as age and hearing status, as well as stimulus factors, such as masker type, when interpreting masked speech recognition data.
Presentation Video
http://ift.tt/2x83tC0

Working Memory and Speech Comprehension in Older Adults With Hearing Impairment

Purpose
This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB).
Method
Twenty-four older (59–73 years) adults with sensorineural HI participated. WM capacity (WMC) was measured using 3 complex span tasks. Speech comprehension was assessed using multiple passages, and speech identification ability was measured using recall of sentence-final words and key words. Speech measures were performed in quiet and in the presence of MTB at a +5 dB signal-to-noise ratio.
Results
Results suggested that participants' speech identification was poorer in MTB, but their ability to comprehend discourse in MTB was at least as good as in quiet. WMC did not explain significant variance in speech comprehension before or after controlling for age and audibility. However, WMC explained significant variance in the identification of key words in low-context sentences in MTB.
Conclusions
These results suggest that WMC plays an important role in identifying low-context sentences in MTB, but not when comprehending semantically rich discourse passages. In general, data did not support individual variability in WMC as a factor that predicts speech comprehension ability in older adults with HI.
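Presenting speech "at a +5 dB signal-to-noise ratio," as in the Method above, is typically achieved by scaling the masker so the RMS levels differ by the target amount. A minimal sketch of that scaling (the function name and toy signals are illustrative, not the study's stimuli):

```python
import math

def scale_noise_for_snr(speech, noise, snr_db):
    """Scale a noise (e.g., babble) signal so that mixing it with
    `speech` yields the requested RMS-based SNR in dB."""
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    # SNR(dB) = 20*log10(rms(speech) / rms(scaled noise))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [gain * n for n in noise]

speech = [0.5, -0.5, 0.5, -0.5]   # toy signal, RMS = 0.5
babble = [1.0, -1.0, 1.0, -1.0]   # toy masker, RMS = 1.0
scaled = scale_noise_for_snr(speech, babble, 5.0)
# speech[i] + scaled[i] now mixes the two at +5 dB SNR
```

The same function works for any SNR; a negative `snr_db` makes the masker more intense than the speech.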

Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

Purpose
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described, and its conceptual design, current implementation, and results obtained to date are reviewed and discussed.
Method
This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources.
Results
The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations.
Conclusions
Both listeners with normal hearing and listeners with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time, as often occurs during turn-taking in a conversation.
Presentation Video
http://ift.tt/2yzJXA3

Investigating the Role of Salivary Cortisol on Vocal Symptoms

Purpose
We investigated whether participants who reported more frequently occurring vocal symptoms showed higher salivary cortisol levels and whether any such associations differed between men and women.
Method
The participants (N = 170; men n = 49, women n = 121) consisted of a population-based sample of Finnish twins born between 1961 and 1989. The participants submitted saliva samples for hormone analysis and completed a web questionnaire including questions regarding the occurrence of 6 vocal symptoms during the past 12 months. The data were analyzed using the generalized estimated equations method.
Results
A composite variable of the vocal symptoms showed a significant positive association with salivary cortisol levels (p < .001). Three of the 6 vocal symptoms were significantly associated with the level of cortisol when analyzed separately (p values less than .05). The results showed no gender difference regarding the effect of salivary cortisol on vocal symptoms.
Conclusions
There was a positive association between the occurrence of vocal symptoms and salivary cortisol levels: participants with higher cortisol levels reported more frequently occurring vocal symptoms. This may reflect the influence of stress, which is a known risk factor for vocal symptoms, with salivary cortisol serving as a biomarker for stress.

Auditory Scene Analysis: An Attention Perspective

Purpose
This review article provides a new perspective on the role of attention in auditory scene analysis.
Method
A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity.
Conclusions
A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources.
Presentation Video
http://ift.tt/2x8vHwE

The Influence of Executive Functions on Phonemic Processing in Children Who Do and Do Not Stutter

Purpose
The aim of the present study was to compare dual-task performance in children who stutter (CWS) and children who do not, to determine whether the groups differed in the ability to attend and to allocate cognitive resources effectively during task performance.
Method
Participants were 24 children (12 CWS and 12 children who do not stutter), with the groups matched for age and sex. For the primary task, participants performed phoneme monitoring in a picture–written word interference task. For the secondary task, participants made pitch judgments on tones presented at varying (short, long) stimulus onset asynchronies (SOAs) from the onset of the picture.
Results
The CWS were comparable to the children who do not stutter in performing the monitoring task although the SOA-based performance differences in this task were more variable in the CWS. The CWS were also significantly slower in making tone decisions at the short SOA and showed a trend for making more errors in this task.
Conclusions
The findings are interpreted to suggest higher dual-task cost effects in CWS. A potential explanation for this finding requiring further testing and confirmation is that the CWS show reduced efficiency in attending to the tone stimuli while simultaneously prioritizing attention to the phoneme-monitoring task.

Error Type and Lexical Frequency Effects: Error Detection in Swedish Children With Language Impairment

Purpose
The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies.
Method
Error sensitivity for grammatical structures vulnerable in Swedish-speaking preschool children with LI (omission of the indefinite article in a noun phrase with a neuter/common noun, and use of the infinitive instead of past-tense regular and irregular verbs) was compared to a control error (singular noun instead of plural). Target structures involved a high-frequency (HF) or a low-frequency (LF) noun/verb. Grammatical and ungrammatical sentences were presented over headphones, and responses were collected through button presses.
Results
Children with LI had similar sensitivity to the plural control error as peers with typical language development, but lower sensitivity to past-tense errors and noun phrase errors. All children showed lexical frequency effects for errors involving verbs (HF > LF), and noun gender effects for noun phrase errors (common > neuter).
Conclusions
School-age children with LI may have subtle difficulties with morphosyntactic processing that mirror expressive difficulties in preschool children with LI. Lexical frequency may affect morphosyntactic processing, which has clinical implications for assessment of grammatical knowledge.
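Error-detection "sensitivity" in tasks like this one is often quantified with d′, which combines the hit rate on ungrammatical sentences with the false-alarm rate on grammatical ones. A minimal sketch, with a log-linear correction to avoid undefined values at rates of 0 or 1 (the function and counts are illustrative, not the study's analysis):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' for a yes/no error-detection task.

    hits/misses          : responses to ungrammatical sentences
    false_alarms/correct_rejections : responses to grammatical sentences
    A log-linear correction (add 0.5 to each rate's numerator, 1 to its
    denominator) keeps the rates strictly between 0 and 1.
    """
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(h) - z(f)

sensitivity = d_prime(18, 2, 2, 18)  # many hits, few false alarms
```

A d′ of 0 indicates chance-level discrimination (hit rate equals false-alarm rate), while larger values indicate greater sensitivity to the error type.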

Architecture of the Suprahyoid Muscles: A Volumetric Musculoaponeurotic Analysis

Purpose
Suprahyoid muscles play a critical role in swallowing. The arrangement of the fiber bundles and aponeuroses has not been investigated volumetrically, even though muscle architecture is an important determinant of function. Thus, the purpose was to digitize, model in three dimensions, and quantify the architectural parameters of the suprahyoid muscles to determine and compare their relative functional capabilities.
Method
Fiber bundles and aponeuroses from 11 formalin-embalmed specimens were serially dissected and digitized in situ. Data were reconstructed in three dimensions using Autodesk Maya. Architectural parameters were quantified, and data were compared using independent samples t-tests and analyses of variance.
Results
Based on architecture and attachment sites, suprahyoid muscles were divided into 3 groups: anteromedial, superolateral, and superoposterior. Architectural parameters differed significantly (p < .05) across muscles and across the 3 groups, suggesting differential roles in hyoid movement during swallowing. When activated simultaneously, anteromedial and superoposterior muscle groups could work together to elevate the hyoid.
Conclusions
The results suggest that the suprahyoid muscles can have individualized roles in hyoid excursion during swallowing. Muscle balance may be important for identifying and treating hyolaryngeal dysfunction in patients with dysphagia.

from #Audiology via ola Kala on Inoreader http://article/60/10/2808/2655032/Architecture-of-the-Suprahyoid-Muscles-A
via IFTTT

Introduction to the Research Symposium Forum

Purpose
The purpose of this introduction is to provide an overview of the articles contained within this research forum of JSLHR. Each of these articles is based upon presentations from the 2016 ASHA Research Symposium.

from #Audiology via ola Kala on Inoreader http://article/60/10/2974/2659416/Introduction-to-the-Research-Symposium-Forum
via IFTTT

Ordinary Interactions Challenge Proposals That Maternal Verbal Responses Shape Infant Vocal Development

Purpose
This study tested proposals that maternal verbal responses shape infant vocal development, proposals based in part on evidence that infants modified their vocalizations to match mothers' experimentally manipulated vowel or consonant–vowel responses to most (i.e., 70%–80%) infant vocalizations. We tested the proposal in ordinary rather than experimentally manipulated interactions.
Method
Response-based proposals were tested in a cross-sectional study of 35 infants, ages 4 to 14 months, engaged in everyday interactions in their homes with their mothers using a standard set of toys and picture books.
Results
Mothers responded to 30% of infant vocalizations with vocal behaviors of their own, far fewer than experimentally manipulated response rates. Moreover, mothers produced comparatively few vowel and consonant–vowel models and responded to infants' vowel and consonant–vowel vocalizations in similar numbers. Infants showed little evidence of systematically modifying their vocal forms to match maternal responses in these interactions. Instead, consonant–vowel vocalizations increased significantly with infant age.
Conclusions
Results obtained in ordinary interactions, rather than response manipulation, did not provide substantial support for response-based mechanisms of infant vocal development. Consistent with other research, however, consonant–vowel productions increased with infant age.

from #Audiology via ola Kala on Inoreader http://article/60/10/2819/2655031/Ordinary-Interactions-Challenge-Proposals-That
via IFTTT

Age-Related Changes in Objective and Subjective Speech Perception in Complex Listening Environments

Purpose
A frequent complaint by older adults is difficulty communicating in challenging acoustic environments. The purpose of this work was to review and summarize information about how speech perception in complex listening situations changes across the adult age range.
Method
This article provides a review of age-related changes in speech understanding in complex listening environments and summarizes results from several studies conducted in our laboratory.
Results
Both degree of high-frequency hearing loss and cognitive test performance limit individuals' ability to understand speech in difficult listening situations as they age. The performance of middle-aged adults is similar to that of younger adults in the presence of noise maskers, but middle-aged adults experience substantially more difficulty when the masker is 1 or 2 competing speech messages. For the most part, middle-aged participants in studies conducted in our laboratory reported as many self-perceived hearing problems as did older adult participants.
Conclusions
Research supports the multifactorial nature of listening in real-world environments. Current audiologic assessment practices are often insufficient to identify the true speech understanding struggles that individuals experience in these situations. This points to the importance of giving weight to patients' self-reported difficulties.
Presentation Video
http://ift.tt/2yzJJZJ

from #Audiology via ola Kala on Inoreader http://article/60/10/3009/2659420/AgeRelated-Changes-in-Objective-and-Subjective
via IFTTT

The History of Stuttering by 7 Years of Age: Follow-Up of a Prospective Community Cohort

Purpose
For a community cohort of children confirmed to have stuttered by the age of 4 years, we report (a) the recovery rate from stuttering, (b) predictors of recovery, and (c) comorbidities at the age of 7 years.
Method
This study was nested in the Early Language in Victoria Study. Predictors of stuttering recovery included child, family, and environmental measures and first-degree relative history of stuttering. Comorbidities examined at 7 years included temperament, language, nonverbal cognition, and health-related quality of life.
Results
The recovery rate by the age of 7 years was 65%. Girls with stronger communication skills at the age of 2 years had higher odds of recovery (adjusted OR = 7.1, 95% CI [1.3, 37.9], p = .02), but similar effects were not evident for boys (adjusted OR = 0.5, 95% CI [0.3, 1.1], p = .10). At the age of 7 years, children who had recovered from stuttering were more likely to have stronger language skills than children whose stuttering persisted (p = .05). No evident differences were identified on other outcomes including nonverbal cognition, temperament, and parent-reported quality of life.
Conclusion
Overall, findings suggested that there may be associations between language ability and recovery from stuttering. Subsequent research is needed to explore the directionality of this relationship.
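The adjusted odds ratios above come from regression modeling; as a rough illustration of the underlying arithmetic, the sketch below computes an unadjusted odds ratio and its Wald 95% confidence interval from a 2 × 2 recovery-by-predictor table. The function name and the counts in the test are illustrative, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = outcome present/absent with the predictor,
    c/d = outcome present/absent without it."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)
```

A CI that excludes 1 (as for girls' communication skills here, [1.3, 37.9]) indicates a statistically reliable association.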

from #Audiology via ola Kala on Inoreader http://article/60/10/2828/2657162/The-History-of-Stuttering-by-7-Years-of-Age
via IFTTT

Effect of Linguistic and Musical Experience on Distributional Learning of Nonnative Lexical Tones

Purpose
Evidence suggests that extensive experience with lexical tones or musical training provides an advantage in perceiving nonnative lexical tones. This investigation concerns whether such an advantage is evident in learning nonnative lexical tones based on the distributional structure of the input.
Method
Using an established protocol, distributional learning of lexical tones was investigated with tone language (Mandarin) listeners with no musical training (Experiment 1) and nontone language (Australian English) listeners with musical training (Experiment 2). Within each experiment, participants were trained on a bimodal (2-peak) or a unimodal (single peak) distribution along a continuum spanning a Thai lexical tone minimal pair. Discrimination performance on the target minimal pair was assessed before and after training.
Results
Mandarin nonmusicians exhibited clear distributional learning (listeners in the bimodal, but not those in the unimodal condition, improved significantly as a function of training), whereas Australian English musicians did not (listeners in both the bimodal and unimodal conditions improved as a function of training).
Conclusions
Our findings suggest that veridical perception of lexical tones is not sufficient for distributional learning of nonnative lexical tones to occur. Rather, distributional learning appears to be modulated by domain-specific pitch experience and is constrained possibly by top-down interference.
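The bimodal versus unimodal training manipulation amounts to sampling tokens from a tone continuum with a two-peak versus single-peak frequency distribution. The sketch below assumes an 8-step continuum with invented weights; it illustrates the design, not the published training distributions.

```python
import random

def sample_continuum(mode, n=64, seed=1):
    """Draw n training tokens from an 8-step lexical-tone continuum.
    'bimodal' concentrates probability mass at steps 2 and 7 (two peaks);
    'unimodal' concentrates mass around the continuum midpoint."""
    steps = list(range(1, 9))
    if mode == "bimodal":
        weights = [1, 5, 2, 1, 1, 2, 5, 1]
    else:  # unimodal
        weights = [1, 2, 4, 6, 6, 4, 2, 1]
    rng = random.Random(seed)
    return [rng.choices(steps, weights)[0] for _ in range(n)]
```

Distributional learning predicts that bimodal exposure leads listeners to infer two categories along the continuum, improving discrimination of the minimal pair, whereas unimodal exposure does not.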

from #Audiology via ola Kala on Inoreader http://article/60/10/2769/2610303/Effect-of-Linguistic-and-Musical-Experience-on
via IFTTT

The Effect of Stimulus Variability on Learning and Generalization of Reading in a Novel Script

Purpose
The benefit of stimulus variability for generalization of acquired skills and knowledge has been shown in motor, perceptual, and language learning but has rarely been studied in reading. We studied the effect of variable training in a novel language on reading trained and untrained words.
Method
Sixty typical adults received 2 sessions of training in reading an artificial script. Participants were assigned to 1 of 3 groups: a variable training group practicing a large set of 24 words, and 2 nonvariable training groups practicing a smaller set of 12 words, with twice the number of repetitions per word.
Results
Variable training resulted in higher accuracy for both trained and untrained items composed of the same graphemes, compared to the nonvariable training. Moreover, performance on untrained items was correlated with phonemic awareness only for the nonvariable training groups.
Conclusions
High stimulus variability increases the reliance on small unit decoding in adults reading in a novel script, which is beneficial for both familiar and novel words. These results show that the statistical properties of the input during reading acquisition influence the type of acquired knowledge and have theoretical and practical implications for planning efficient reading instruction methods.
Supplemental Material
http://ift.tt/2h7vgMh

from #Audiology via ola Kala on Inoreader http://article/60/10/2840/2654585/The-Effect-of-Stimulus-Variability-on-Learning-and
via IFTTT

A Systematic Review and Meta-Analysis of Predictors of Expressive-Language Outcomes Among Late Talkers

Purpose
The purpose of this study was to explore the literature on predictors of outcomes among late talkers using systematic review and meta-analysis methods. We sought to answer the question: What factors predict preschool-age expressive-language outcomes among late-talking toddlers?
Method
We entered carefully selected search terms into the following electronic databases: Communication & Mass Media Complete, ERIC, Medline, PsycEXTRA, Psychological and Behavioral Sciences, and PsycINFO. We conducted a separate, random-effects model meta-analysis for each individual predictor that was used in a minimum of 5 studies. We also tested potential moderators of the relationship between predictors and outcomes using metaregression and subgroup analysis. Last, we conducted publication-bias and sensitivity analyses.
Results
We identified 20 samples, comprising 2,134 children, in a systematic review. According to the results of the meta-analyses, significant predictors of expressive-language outcomes included toddlerhood expressive-vocabulary size, receptive language, and socioeconomic status. Nonsignificant predictors included phrase speech, gender, and family history.
Conclusions
To our knowledge this is the first synthesis of the literature on predictors of outcomes among late talkers using meta-analysis. Our findings clarify the contributions of several constructs to outcomes and highlight the importance of early receptive language to expressive-language development.
Supplemental Materials
http://ift.tt/2yeQuj2
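Random-effects meta-analyses of this kind are often fit with the DerSimonian–Laird estimator: weight each study by the inverse of its sampling variance plus an estimated between-study variance. The following minimal sketch (function name and inputs are illustrative) shows the pooling step:

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling (sketch).
    effects: per-study effect sizes; variances: their sampling variances."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    # Fixed-effect estimate and Cochran's Q heterogeneity statistic.
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance estimate
    # Re-weight by total (within + between) variance and pool.
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
```

With equal variances the pooled estimate reduces to the simple mean of the study effects; unequal variances down-weight noisier studies.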

from #Audiology via ola Kala on Inoreader http://article/60/10/2935/2654661/A-Systematic-Review-and-MetaAnalysis-of-Predictors
via IFTTT

Language Sample Analysis and Elicitation Technique Effects in Bilingual Children With and Without Language Impairment

Purpose
This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination index [SI]), which are commonly used indices for diagnosing primary language impairment in Spanish–English-speaking children in the United States.
Method
Twenty bilingual Spanish–English-speaking children with typical language development and 20 with primary language impairment participated in the study. Four analyses of variance were conducted to evaluate the effect of language elicitation technique and group on D, GE/CU, MLUw, and SI. Also, 2 discriminant analyses were conducted to assess which indices were more effective for story retelling and storytelling and their classification accuracy across elicitation techniques.
Results
D, MLUw, and SI were influenced by the type of elicitation technique, but GE/CU was not. The classification accuracy of language sample analysis was greater in story retelling than in storytelling, with GE/CU and D being useful indicators of language abilities in story retelling and GE/CU and SI in storytelling.
Conclusion
Two indices in language sample analysis may be sufficient for diagnosis in 4- to 5-year-old bilingual Spanish–English-speaking children.

from #Audiology via ola Kala on Inoreader http://article/60/10/2852/2654586/Language-Sample-Analysis-and-Elicitation-Technique
via IFTTT

Pressurized Wideband Absorbance Findings in Healthy Neonates: A Preliminary Study

Purpose
The present study aimed to establish normative data for wideband absorbance (WBA) measured at tympanometric peak pressure (TPP) and 0 daPa and to assess the test–retest reliability of both measurements in healthy neonates.
Method
Participants of this cross-sectional study included 99 full-term neonates (165 ears) with mean chronological age of 46.7 hrs (SD = 26.3 hrs). Of the 99 neonates, 58 were Malay, 28 were Indian, and 13 were Chinese. The neonates who passed high-frequency (1 kHz) tympanometry, acoustic stapedial reflex, and distortion product otoacoustic emission screening tests were assessed using a pressurized WBA test (wideband tympanometry). To reduce the number of measurement points, the WBA responses were averaged to 16 one-third octave frequency bands from 0.25 to 8 kHz. A mixed-model analysis of variance was applied to the data to investigate the effects of frequency, ear, gender, and ethnicity on WBA. The analysis of variance was also used to compare WBA measured at TPP and at 0 daPa. An intraclass correlation coefficient (ICC) test was applied at each of the 16 frequency bands to measure the test–retest reliability of WBA at TPP and 0 daPa.
Results
Both WBA measurements at TPP and 0 daPa exhibited a multipeaked pattern with 2 maxima at 1.25–1.6 kHz and 6.3 kHz and 2 minima at 0.5 and 4 kHz. The mean WBA measured at TPP was significantly higher than that measured at 0 daPa at 0.25, 0.4, 0.5, 1.25, and 1.6 kHz only. A normative data set was developed for absorbance at TPP and at 0 daPa. There was no significant effect of ethnicity, gender, or ear on either measurement of WBA. The test–retest reliability of WBA at TPP and 0 daPa was high, with intraclass correlation coefficients ranging from 0.77 to 0.97 across the frequencies.
Conclusions
Normative data of WBA measured at TPP and 0 daPa for neonates were provided in the present study. Although WBA at TPP was slightly higher than the WBA measured at 0 daPa at some frequencies below 2 kHz, the WBA patterns of the 2 measurements were nearly identical. Moreover, the test–retest reliability of both WBA measurements was high.
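Test–retest reliability is commonly quantified with ICC(2,1) (two-way random effects, absolute agreement, single measurement). The abstract does not state which ICC form was used, so the sketch below is one plausible implementation, computed from the two-way ANOVA mean squares:

```python
from statistics import mean

def icc_2_1(test, retest):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement -- one common test-retest reliability coefficient."""
    data = [list(pair) for pair in zip(test, retest)]
    n, k = len(data), 2
    grand = mean(v for row in data for v in row)
    row_means = [mean(row) for row in data]   # per-subject (ear) means
    col_means = [mean(col) for col in zip(*data)]  # per-session means
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Identical test and retest values yield an ICC of 1.0; values in the 0.77–0.97 range reported here indicate good to excellent reliability.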

from #Audiology via ola Kala on Inoreader http://article/60/10/2965/2654699/Pressurized-Wideband-Absorbance-Findings-in
via IFTTT

Children's Comprehension of Object Relative Sentences: It's Extant Language Knowledge That Matters, Not Domain-General Working Memory

Purpose
The aim of this study was to determine whether extant language (lexical) knowledge or domain-general working memory is the better predictor of comprehension of object relative sentences for children with typical development. We hypothesized that extant language knowledge, not domain-general working memory, is the better predictor.
Method
Fifty-three children (ages 9–11 years) completed a word-level verbal working-memory task, indexing extant language (lexical) knowledge; an analog nonverbal working-memory task, representing domain-general working memory; and a hybrid sentence comprehension task incorporating elements of both agent selection and cross-modal picture-priming paradigms. Images of the agent and patient were displayed at the syntactic gap in the object relative sentences, and the children were asked to select the agent of the sentence.
Results
Results of general linear modeling revealed that extant language knowledge accounted for a unique 21.3% of variance in the children's object relative sentence comprehension over and above age (8.3%). Domain-general working memory accounted for a nonsignificant 1.6% of variance.
Conclusions
We interpret the results to suggest that extant language knowledge and not domain-general working memory is a critically important contributor to children's object relative sentence comprehension. Results support a connectionist view of the association between working memory and object relative sentence comprehension.
Supplemental Materials
http://ift.tt/2y59ZcY
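The "unique variance over and above age" figures reflect hierarchical regression: fit a reduced model (age only), then a full model (age plus the predictor), and take the difference in R². A pure-Python sketch with a hypothetical `ols_r2` helper:

```python
def ols_r2(y, xs):
    """R^2 of an OLS fit of y on predictor columns xs (intercept added),
    via the normal equations; no pivoting, so suitable only for small,
    well-conditioned illustrative problems."""
    cols = [[1.0] * len(y)] + [list(x) for x in xs]
    p = len(cols)
    a = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(p)]
         for i in range(p)]
    c = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            f = a[j][i] / a[i][i]
            a[j] = [aj - f * ai for aj, ai in zip(a[j], a[i])]
            c[j] -= f * c[i]
    b = [0.0] * p
    for i in reversed(range(p)):             # back substitution
        b[i] = (c[i] - sum(a[i][j] * b[j] for j in range(i + 1, p))) / a[i][i]
    yhat = [sum(bi * col[t] for bi, col in zip(b, cols)) for t in range(len(y))]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

On this logic, the study's unique 21.3% corresponds to R²(age + language knowledge) minus R²(age alone).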

from #Audiology via ola Kala on Inoreader http://article/60/10/2865/2654662/Childrens-Comprehension-of-Object-Relative
via IFTTT

Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

Purpose
This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings.
Method
The results from neuroscience and psychoacoustics are reviewed.
Results
In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.”
Conclusions
How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise.
Presentation Video
http://ift.tt/2yzJvBR

from #Audiology via ola Kala on Inoreader http://article/60/10/2976/2659417/Cortical-and-Sensory-Causes-of-Individual
via IFTTT

Home and Community Language Proficiency in Spanish–English Early Bilingual University Students

Purpose
This study assessed home and community language proficiency in Spanish–English bilingual university students to investigate whether the vocabulary gap reported in studies of bilingual children persists into adulthood.
Method
Sixty-five early bilinguals (mean age = 21 years) were assessed in English and Spanish vocabulary and verbal reasoning ability using subtests of the Woodcock-Muñoz Language Survey–Revised (Schrank & Woodcock, 2009). Their English scores were compared to 74 monolinguals matched in age and level of education. Participants also completed a background questionnaire.
Results
Bilinguals scored below the monolingual control group on both subtests, and the difference was larger for vocabulary compared to verbal reasoning. However, bilinguals were close to the population mean for verbal reasoning. Spanish scores were on average lower than English scores, but participants differed widely in their degree of balance. Participants with an earlier age of acquisition of English and more current exposure to English tended to be more dominant in English.
Conclusions
Vocabulary tests in the home or community language may underestimate bilingual university students' true verbal ability and should be interpreted with caution in high-stakes situations. Verbal reasoning ability may be more indicative of a bilingual's verbal ability.

from #Audiology via ola Kala on Inoreader http://article/60/10/2879/2654584/Home-and-Community-Language-Proficiency-in
via IFTTT

Speech Perception in Complex Acoustic Environments: Developmental Effects

Purpose
The ability to hear and understand speech in complex acoustic environments follows a prolonged time course of development. The purpose of this article is to provide a general overview of the literature describing age effects in susceptibility to auditory masking in the context of speech recognition, including a summary of findings related to the maturation of processes thought to facilitate segregation of target from competing speech.
Method
Data from published and ongoing studies are discussed, with a focus on synthesizing results from studies that address age-related changes in the ability to perceive speech in the presence of a small number of competing talkers.
Conclusions
This review provides a summary of the current state of knowledge that is valuable for researchers and clinicians. It highlights the importance of considering listener factors, such as age and hearing status, as well as stimulus factors, such as masker type, when interpreting masked speech recognition data.
Presentation Video
http://ift.tt/2x83tC0

from #Audiology via ola Kala on Inoreader http://article/60/10/3001/2659419/Speech-Perception-in-Complex-Acoustic-Environments
via IFTTT

Working Memory and Speech Comprehension in Older Adults With Hearing Impairment

Purpose
This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB).
Method
Twenty-four older (59–73 years) adults with sensorineural HI participated. WM capacity (WMC) was measured using 3 complex span tasks. Speech comprehension was assessed using multiple passages, and speech identification ability was measured using recall of sentence-final words and key words. Speech measures were performed in quiet and in the presence of MTB at a +5 dB signal-to-noise ratio.
Results
Results suggested that participants' speech identification was poorer in MTB, but their ability to comprehend discourse in MTB was at least as good as in quiet. WMC did not explain significant variance in speech comprehension before or after controlling for age and audibility. However, WMC explained significant variance in the identification of key words in low-context sentences in MTB.
Conclusions
These results suggest that WMC plays an important role in identifying low-context sentences in MTB, but not when comprehending semantically rich discourse passages. In general, data did not support individual variability in WMC as a factor that predicts speech comprehension ability in older adults with HI.

from #Audiology via xlomafota13 on Inoreader http://article/60/10/2949/2657619/Working-Memory-and-Speech-Comprehension-in-Older
via IFTTT

Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

Purpose
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article.
Method
This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources.
Results
The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally cannot use this spatial filter as effectively as listeners with normal hearing can, especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations.
Conclusions
Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation.
Presentation Video
http://ift.tt/2yzJXA3
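Acoustic beamforming of the kind the VGHA uses can be illustrated with a basic delay-and-sum beamformer for a linear microphone array: each channel is delayed so that a plane wave from the steered direction adds coherently while off-axis sources are attenuated. The implementation below is a generic sketch with integer-sample delays and invented parameters, not the VGHA's actual design:

```python
import math

def delay_and_sum(mics, positions, angle_deg, fs=16000, c=343.0):
    """Delay-and-sum beamformer for a linear array (sketch).
    mics: equal-length sample lists, one per microphone.
    positions: mic positions along the array axis in meters.
    angle_deg: steering angle from broadside (0 = straight ahead)."""
    # Integer-sample delay aligning a plane wave from angle_deg.
    delays = [round(fs * p * math.sin(math.radians(angle_deg)) / c)
              for p in positions]
    base = min(delays)
    n = len(mics[0])
    out = [0.0] * n
    for sig, d in zip(mics, delays):
        shift = d - base  # nonnegative relative advance for this channel
        for t in range(n - shift):
            out[t] += sig[t + shift]
    return [v / len(mics) for v in out]
```

In a VGHA-like system, `angle_deg` would be updated continuously from the listener's eye gaze so the beam follows whichever talker is being looked at.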

from #Audiology via xlomafota13 on Inoreader http://article/60/10/3027/2659422/Enhancing-Auditory-Selective-Attention-Using-a
via IFTTT

Investigating the Role of Salivary Cortisol on Vocal Symptoms

Purpose
We investigated whether participants who reported more frequently occurring vocal symptoms showed higher salivary cortisol levels and whether any such association differed between men and women.
Method
The participants (N = 170; men n = 49, women n = 121) consisted of a population-based sample of Finnish twins born between 1961 and 1989. The participants submitted saliva samples for hormone analysis and completed a web questionnaire including questions regarding the occurrence of 6 vocal symptoms during the past 12 months. The data were analyzed using the generalized estimated equations method.
Results
A composite variable of the vocal symptoms showed a significant positive association with salivary cortisol levels (p < .001). Three of the 6 vocal symptoms were significantly associated with the level of cortisol when analyzed separately (p values less than .05). The results showed no gender difference regarding the effect of salivary cortisol on vocal symptoms.
Conclusions
There was a positive association between the occurrence of vocal symptoms and salivary cortisol levels: participants with higher cortisol levels reported more frequently occurring vocal symptoms. This may reflect the influence of stress, which is a known risk factor for vocal symptoms and for which salivary cortisol serves as a biomarker.

from #Audiology via xlomafota13 on Inoreader http://article/60/10/2781/2654587/Investigating-the-Role-of-Salivary-Cortisol-on
via IFTTT

Auditory Scene Analysis: An Attention Perspective

Purpose
This review article provides a new perspective on the role of attention in auditory scene analysis.
Method
A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity.
Conclusions
A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources.
Presentation Video
http://ift.tt/2x8vHwE

from #Audiology via xlomafota13 on Inoreader http://article/60/10/2989/2659418/Auditory-Scene-Analysis-An-Attention-Perspective
via IFTTT

The Influence of Executive Functions on Phonemic Processing in Children Who Do and Do Not Stutter

Purpose
The aim of the present study was to investigate dual-task performance in children who stutter (CWS) and children who do not, and to determine whether the groups differed in the ability to attend and allocate cognitive resources effectively during task performance.
Method
Participants were 24 children (12 CWS), with the groups matched for age and sex. For the primary task, participants performed phoneme monitoring in a picture–written word interference task. For the secondary task, participants made pitch judgments on tones presented at varying (short, long) stimulus onset asynchronies (SOAs) from the onset of the picture.
Results
The CWS were comparable to the children who do not stutter in performing the monitoring task, although the SOA-based performance differences in this task were more variable in the CWS. The CWS were also significantly slower in making tone decisions at the short SOA and showed a trend toward making more errors in this task.
Conclusions
The findings are interpreted to suggest higher dual-task cost effects in CWS. A potential explanation for this finding requiring further testing and confirmation is that the CWS show reduced efficiency in attending to the tone stimuli while simultaneously prioritizing attention to the phoneme-monitoring task.

from #Audiology via xlomafota13 on Inoreader http://article/60/10/2792/2654663/The-Influence-of-Executive-Functions-on-Phonemic
via IFTTT

Error Type and Lexical Frequency Effects: Error Detection in Swedish Children With Language Impairment

Purpose
The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies.
Method
Error sensitivity for grammatical structures vulnerable in Swedish-speaking preschool children with LI (omission of the indefinite article in a noun phrase with a neuter/common noun, and use of the infinitive instead of past-tense regular and irregular verbs) was compared to a control error (singular noun instead of plural). Target structures involved a high-frequency (HF) or a low-frequency (LF) noun/verb. Grammatical and ungrammatical sentences were presented in headphones, and responses were collected through button presses.
Results
Children with LI had similar sensitivity to the plural control error as peers with typical language development, but lower sensitivity to past-tense errors and noun phrase errors. All children showed lexical frequency effects for errors involving verbs (HF > LF), and noun gender effects for noun phrase errors (common > neuter).
Conclusions
School-age children with LI may have subtle difficulties with morphosyntactic processing that mirror expressive difficulties in preschool children with LI. Lexical frequency may affect morphosyntactic processing, which has clinical implications for assessment of grammatical knowledge.

from #Audiology via xlomafota13 on Inoreader http://article/60/10/2924/2654583/Error-Type-and-Lexical-Frequency-Effects-Error
via IFTTT
