Friday, May 18, 2018

Children's Acoustic and Linguistic Adaptations to Peers With Hearing Impairment

Purpose
This study aims to examine the clear speaking strategies used by older children when interacting with a peer with hearing loss, focusing on both acoustic and linguistic adaptations in speech.
Method
The Grid task, a problem-solving task developed to elicit spontaneous interactive speech, was used to obtain a range of global acoustic and linguistic measures. Eighteen 9- to 14-year-old children with normal hearing (NH) performed the task in pairs, once with a friend with NH and once with a friend with a hearing impairment (HI).
Results
In HI-directed speech, children increased their fundamental frequency range and midfrequency intensity, decreased the number of words per phrase, and expanded their vowel space area by increasing F1 and F2 range, relative to NH-directed speech. However, participants did not appear to make changes to their articulation rate, the lexical frequency of content words, or lexical diversity when talking to their friend with HI compared with their friend with NH.
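The vowel space expansion reported here is conventionally quantified as the area of the polygon spanned by mean (F1, F2) values of the corner vowels. A minimal sketch of that computation (illustrative formant values, not the study's data; the shoelace formula is a standard choice, not necessarily the one the authors used):

```python
def vowel_space_area(corner_formants):
    """Shoelace (polygon) area of a vowel space, given (F1, F2) means in Hz
    for the corner vowels listed in order around the perimeter."""
    pts = list(corner_formants)
    twice_area = 0.0
    for (f1a, f2a), (f1b, f2b) in zip(pts, pts[1:] + pts[:1]):
        twice_area += f1a * f2b - f1b * f2a
    return abs(twice_area) / 2.0

# Hypothetical corner-vowel means (/i/, /a/, /u/) for one talker, in Hz:
triangle = [(300, 2300), (800, 1300), (350, 800)]
area_hz2 = vowel_space_area(triangle)
```

A larger F1 and F2 range pushes the corner vowels apart and directly increases this area, which is why the range expansion and the area expansion go together.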
Conclusions
Older children show evidence of listener-oriented adaptations to their speech production; although their speech production systems are still developing, they are able to make speech adaptations to meet the needs of a peer with HI, even without being given a specific instruction to do so.
Supplemental Material
https://doi.org/10.23641/asha.6118817

from #Audiology via ola Kala on Inoreader https://ift.tt/2HLjdRA
via IFTTT

Auditory–Perceptual Assessment of Fluency in Typical and Neurologically Disordered Speech

Purpose
The aim of this study is to investigate how speech fluency in typical and atypical speech is perceptually assessed by speech-language pathologists (SLPs). Our research questions were as follows: (a) How do SLPs rate fluency in speakers with and without neurological communication disorders? (b) Do they differentiate the speaker groups? and (c) What features do they hear impairing speech fluency?
Method
Ten SLPs specialized in neurological communication disorders volunteered as expert judges to rate 90 narrative speech samples on a Visual Analogue Scale (see Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kraemer, & Hillman, 2009, p. 127). The samples—randomly mixed—were from 70 neurologically healthy speakers (the control group) and 20 speakers with traumatic brain injury, 10 of whom had neurogenic stuttering (designated as Clinical Groups A and B).
Results
The fluency ratings were higher for typical speakers than for speakers with traumatic brain injury; however, agreement among the judges was higher for atypical fluency. Perceived speech fluency was significantly impaired by the features of stuttering and by other, unspecified features, but not by speech rate. Stuttering was also perceived in speakers not diagnosed as stutterers. A borderline between typical and atypical fluency was found.
Conclusions
Speech fluency is a multifaceted phenomenon, and on the basis of this study, we suggest a more general approach to fluency and its deviations that will take into account, in addition to the motor and linguistic aspects of fluency, the metalinguistic component of expression as well. The results of this study indicate a need for further studies on the precise nature of borderline fluency and its different disfluencies.


Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory

Purpose
We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations.
Method
Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford–Kowal–Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM.
Results
Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly as a distinct 3rd factor with minimal secondary loading from sentence recall and short-term memory. Nonverbal IQ loaded as a 4th factor.
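An age-partialed correlation of the kind reported here can be computed by regressing age out of both measures and correlating the residuals. A stdlib-only sketch (hypothetical helper names; the study presumably used a statistics package):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def regress_out(y, z):
    """Residuals of y after simple least-squares regression on z."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = (sum((a - mz) * (b - my) for a, b in zip(z, y))
            / sum((a - mz) ** 2 for a in z))
    return [b - (my + beta * (a - mz)) for a, b in zip(z, y)]

def partial_r(x, y, z):
    """Correlation between x and y with z (here: age) partialed out."""
    return pearson_r(regress_out(x, z), regress_out(y, z))
```

The raw and partial correlations can differ sharply in sign and size, which is exactly why age is controlled before interpreting associations among attention, memory, and language scores.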
Conclusions
Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of the BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to test the proposed role of WM in adverse listening situations.


Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

Purpose
The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces.
Method
Young adult speakers produced 3 repetitions of 2 different sentences at 3 different loudness levels. Lingual kinematic and acoustic signals were collected and analyzed. Acoustic and kinematic variants of several vowel space metrics were calculated from the formant frequencies and the position of 2 lingual markers. Traditional metrics included triangular vowel space area and the vowel articulation index. Acoustic and kinematic variants of sentence-level metrics based on the articulatory–acoustic vowel space and the vowel space hull area were also calculated.
Results
Both acoustic and kinematic variants of the sentence-level metrics significantly increased with an increase in loudness, whereas no statistically significant differences in traditional vowel-point metrics were observed for either the kinematic or acoustic variants across the 3 loudness conditions. In addition, moderate-to-strong relationships between the acoustic and kinematic variants of the sentence-level vowel space metrics were observed for the majority of participants.
Conclusions
These data suggest that both kinematic and acoustic vowel space metrics that reflect the dynamic contributions of both consonant and vowel segments are sensitive to within-speaker changes in articulation associated with manipulations of speech intensity.


The Prevalence of Speech and Language Disorders in French-Speaking Preschool Children From Yaoundé (Cameroon)

Purpose
The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon.
Method
A total of 460 participants aged 3–5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist.
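The 2-SD identification criterion is straightforward to operationalize. A sketch under the assumption that renormed scores are available as plain numbers (hypothetical function name and values; the renormed ELO norms are not reproduced here):

```python
from statistics import mean, stdev

def below_cutoff(norm_scores, child_score, n_sd=2.0):
    """True if child_score falls more than n_sd standard deviations below
    the normative sample mean -- the study's criterion for identifying an
    articulation, expressive language, or receptive language disorder."""
    m, sd = mean(norm_scores), stdev(norm_scores)
    return child_score < m - n_sd * sd

# Hypothetical renormed subtest scores (mean 50, SD 2):
norms = [48, 52, 50, 49, 51, 47, 53, 50]
flagged = below_cutoff(norms, 41)
```

Renorming on the sample itself, as done here, means the cutoff reflects the local population rather than the French norms shipped with the test.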
Results
Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%.
Conclusion
Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.


Kinematic Features of Jaw and Lips Distinguish Symptomatic From Presymptomatic Stages of Bulbar Decline in Amyotrophic Lateral Sclerosis

Purpose
The goals of this study were to (a) classify speech movements of patients with amyotrophic lateral sclerosis (ALS) in presymptomatic and symptomatic phases of bulbar function decline relying solely on kinematic features of lips and jaw and (b) identify the most important measures that detect the transition between early and late bulbar changes.
Method
One hundred ninety-two recordings obtained from 64 patients with ALS were considered for the analysis. Feature selection and classification algorithms were used to analyze lip and jaw movements recorded with Optotrak Certus (Northern Digital Inc.) during a sentence task. A feature set, which included 35 measures of movement range, velocity, acceleration, jerk, and area measures of lips and jaw, was used to classify sessions according to the speaking rate into presymptomatic (> 160 words per minute) and symptomatic (< 160 words per minute) groups.
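The speaking-rate split that defines the two groups can be expressed as a simple labeling rule. A sketch (hypothetical session IDs; the abstract leaves the boundary case of exactly 160 words per minute unspecified, so its handling here is an assumption):

```python
def bulbar_stage(words_per_minute, threshold=160.0):
    """Label a recording session by speaking rate, mirroring the study's
    split (> 160 wpm presymptomatic, < 160 wpm symptomatic). Sessions at
    exactly the threshold are grouped as symptomatic here -- an assumption,
    since the abstract does not state how ties are handled."""
    return "presymptomatic" if words_per_minute > threshold else "symptomatic"

# Hypothetical sessions and their speaking rates:
sessions = {"s01": 185.0, "s02": 142.5, "s03": 160.0}
labels = {sid: bulbar_stage(wpm) for sid, wpm in sessions.items()}
```

These rate-derived labels serve as the ground truth against which the kinematic-feature classifier is evaluated.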
Results
Presymptomatic and symptomatic phases of bulbar decline were distinguished with high accuracy (87%), relying only on lip and jaw movements. The features that best detected the transition between early and later bulbar stages included the cumulative path of the lower lip and jaw and the peak velocity, acceleration, and jerk of the lower lip and jaw.
Conclusion
The results established a relationship between facial kinematics and bulbar function decline in ALS. Considering that facial movements can be recorded by means of novel inexpensive and easy-to-use, video-based methods, this work supports the development of an automatic system for facial movement analysis to help clinicians in tracking the disease progression in ALS.



Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

Purpose
The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression.
Study Design
Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos.
Method
A diffusive behavior detection–based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions.
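Of the four acoustic measures evaluated, jitter has the most compact definition: the mean absolute difference between consecutive glottal cycle periods, relative to the mean period. A sketch of one common ("local jitter") variant, which may differ in detail from the implementation used in the study:

```python
def jitter_percent(periods_ms):
    """Local jitter (%): mean absolute difference between consecutive
    glottal cycle periods, expressed relative to the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods_ms, periods_ms[1:])]
    mean_period = sum(periods_ms) / len(periods_ms)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_period

steady = [5.0, 5.0, 5.0, 5.0]     # perfectly periodic signal: jitter 0
perturbed = [5.0, 5.1, 4.9, 5.0]  # cycle-to-cycle variation raises jitter
```

As the chaos level of a signal rises, period estimates become unreliable, which is why perturbation measures like this one are expected to break down beyond a certain noise level.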
Results
Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study.
Conclusion
The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.


Weighting of Amplitude and Formant Rise Time Cues by School-Aged Children: A Mismatch Negativity Study

Purpose
An important skill in the development of speech perception is to apply optimal weights to acoustic cues so that phonemic information is recovered from speech with minimum effort. Here, we investigated the development of acoustic cue weighting of amplitude rise time (ART) and formant rise time (FRT) cues in children as measured by mismatch negativity (MMN).
Method
Twelve adults and 36 children aged 6–12 years listened to a /ba/–/wa/ contrast in an oddball paradigm in which the standard stimulus had the ART and FRT cues of /ba/. In different blocks, the deviant stimulus had either the ART or FRT cues of /wa/.
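An oddball block of this kind is a pseudo-random train of standards with occasional deviants. A sketch of sequence generation (the no-consecutive-deviants constraint is a common convention in MMN designs, assumed here rather than taken from the abstract):

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.15, seed=1):
    """Generate a standard/deviant trial train for one oddball block.
    Two deviants are never allowed back to back -- a common convention,
    assumed here rather than stated in the study."""
    rng = random.Random(seed)
    seq, prev_was_deviant = [], False
    for _ in range(n_trials):
        is_deviant = (not prev_was_deviant) and rng.random() < deviant_prob
        seq.append("deviant" if is_deviant else "standard")
        prev_was_deviant = is_deviant
    return seq

block = oddball_sequence(200)
```

In this study the standard always carried the /ba/ cues, and the deviant carried either the ART or the FRT cues of /wa/, depending on the block.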
Results
The results revealed that children younger than 10 years were sensitive to both ART and FRT cues whereas 10- to 12-year-old children and adults were sensitive only to FRT cues. Moreover, children younger than 10 years generated a positive mismatch response, whereas older children and adults generated MMN.
Conclusion
These results suggest that preattentive adultlike weighting of ART and FRT cues is attained only by 10 years of age and accompanies the change from mismatch response to the more mature MMN response.
Supplemental Material
https://doi.org/10.23641/asha.6207608


Effects of a Tablet-Based Home Practice Program With Telepractice on Treatment Outcomes in Chronic Aphasia

Purpose
The aim of this study was to determine if a tablet-based home practice program with weekly telepractice support could enable long-term maintenance of recent treatment gains and foster new language gains in poststroke aphasia.
Method
In a pre–post group study of home practice outcomes, 21 individuals with chronic aphasia were examined before and after a 6-month home practice phase and again at follow-up 4 months later. The main outcome measure studied was change in naming previously treated or untreated, practiced or unpracticed pictures of objects and actions. Individualized home practice programs were created in iBooks Author with semantic, phonemic, and orthographic cueing in pictures, words, and videos in order to facilitate naming of previously treated or untreated pictures.
Results
Home practice was effective for all participants, with severity moderating treatment effects, such that individuals with the most severe aphasia made and maintained fewer gains. There was a negative relationship between the amount of training required for iPad proficiency and improvements on practiced and unpracticed pictures, and a positive relationship between practice compliance and these same improvements.
Conclusion
Unsupervised home practice with weekly video teleconferencing support is effective. This study demonstrates that even individuals with chronic severe aphasia, including those with no prior smart device or even computer experience, can attain independent proficiency to continue practicing and improving their language skills beyond therapy discharge. This could represent a low-cost therapy option for individuals without insurance coverage and/or those for whom mobility is an obstacle to obtaining traditional aphasia therapy.


Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development

Purpose
The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD).
Method
Fifteen children with SLI (M age = 6;5 [years;months]) and 15 with TD (M age = 6;4) completed a forward gating task that presented consonant–vowel–consonant dense and sparse (neighborhood density) nouns and verbs (syntactic class).
Results
On all dependent variables, the SLI group performed like the TD group. Recognition performance was highest for dense words and nouns. The majority of 1st nontarget responses shared the 1st phoneme with the target (i.e., were in the target's cohort). When considering the ranking of word types from easiest to most difficult, children showed equivalent recognition performance for dense verbs and sparse nouns, which were both easier to recognize than sparse verbs but more difficult than dense nouns.
Conclusion
The current study yields new insight into how children access lexical–phonological information and syntactic class during the process of spoken word recognition. Given the identical pattern of results for the SLI and TD groups, we hypothesize that accessing lexical–phonological information may be a strength for children with SLI. We also discuss implications for using the forward gating paradigm as a measure of word recognition.


Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs

Purpose
A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support.
Method
Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified.
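At a 60-Hz sampling rate, latency to fixate and time spent fixating reduce to counting samples. A sketch, assuming a per-sample boolean trace marking whether gaze fell within an area of interest (the actual pipeline surely involves fixation filtering and more than this):

```python
SAMPLE_MS = 1000.0 / 60.0  # one gaze sample at 60 Hz lasts ~16.7 ms

def gaze_measures(in_aoi):
    """Latency to first fixation and total fixation time (both in ms) on an
    area of interest, from a per-sample boolean gaze trace."""
    if True not in in_aoi:
        return None, 0.0  # the AOI was never viewed
    latency_ms = in_aoi.index(True) * SAMPLE_MS
    dwell_ms = sum(in_aoi) * SAMPLE_MS
    return latency_ms, dwell_ms

# Hypothetical 6-sample trace (100 ms of viewing):
latency, dwell = gaze_measures([False, False, True, True, False, True])
```

Comparing such latency and dwell values across groups is what underlies the finding that individuals with autism were slower to first view the human figures.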
Results
The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome.
Conclusion
The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed.
Supplemental Material
https://doi.org/10.23641/asha.6066545


Does Implicit Voice Learning Improve Spoken Language Processing? Implications for Clinical Practice

Purpose
In typical interactions with other speakers, including a clinical environment, listeners become familiar with voices through implicit learning. Previous studies have found evidence for a Familiar Talker Advantage (better speech perception and spoken language processing for familiar voices) following explicit voice learning. The current study examined whether a Familiar Talker Advantage would result from implicit voice learning.
Method
Thirty-three adults and 16 second graders were familiarized with 1 of 2 talkers' voices over 2 days through live interactions as 1 of 2 experimenters administered standardized tests and interacted with the listeners. To assess whether this implicit voice learning would generate a Familiar Talker Advantage, listeners completed a baseline sentence recognition task and a post-learning sentence recognition task with both the familiar talker and the unfamiliar talker.
Results
No significant effect of voice familiarity was found for either the children or the adults following implicit voice learning. Effect size estimates suggest that familiarity with the voice may benefit some listeners, despite the lack of an overall effect of familiarity.
Discussion
We discuss possible clinical implications of this finding and directions for future research.


Morphosyntactic Production and Verbal Working Memory: Evidence From Greek Aphasia and Healthy Aging

Purpose
The present work investigated whether verbal working memory (WM) affects morphosyntactic production in configurations that do not involve or favor similarity-based interference and whether WM interacts with verb-related morphosyntactic categories and/or cue–target distance (locality). It also explored whether the findings related to the questions above lend support to a recent account of agrammatic morphosyntactic production: Interpretable Features' Impairment Hypothesis (Fyndanis, Varlokosta, & Tsapkini, 2012).
Method
A sentence completion task testing production of subject–verb agreement, tense/time reference, and aspect in local and nonlocal conditions and two verbal WM tasks were administered to 8 Greek-speaking persons with agrammatic aphasia (PWA) and 103 healthy participants.
Results
The 3 morphosyntactic categories dissociated in both groups (agreement > tense > aspect). A significant interaction emerged in both groups between the 3 morphosyntactic categories and WM. There was no main effect of locality in either of the 2 groups. At the individual level, all 8 PWA exhibited dissociations between agreement, tense, and aspect, and effects of locality were contradictory.
Conclusions
Results suggest that individuals with WM limitations (both PWA and healthy older speakers) show dissociations between the production of verb-related morphosyntactic categories. WM affects performance, shaping the pattern of morphosyntactic production (in Greek: subject–verb agreement > tense > aspect). The absence of an effect of locality suggests that executive capacities tapped by WM tasks are involved in morphosyntactic processing of demanding categories even when the cue is adjacent to the target. Results are consistent with the Interpretable Features' Impairment Hypothesis (Fyndanis et al., 2012).
Supplemental Material
https://doi.org/10.23641/asha.6024428


Population Health in Pediatric Speech and Language Disorders: Available Data Sources and a Research Agenda for the Field

Purpose
The aim of the study was to provide an overview of population science as applied to speech and language disorders, illustrate data sources, and advance a research agenda on the epidemiology of these conditions.
Method
Computer-aided database searches were performed to identify key national surveys and other sources of data necessary to establish the incidence, prevalence, and course and outcome of speech and language disorders. This article also summarizes a research agenda that could enhance our understanding of the epidemiology of these disorders.
Results
Although the data yielded estimates of prevalence and incidence for speech and language disorders, existing sources of data are inadequate to establish reliable rates of incidence, prevalence, and outcomes for speech and language disorders at the population level.
Conclusions
Greater support for inclusion of speech and language disorder–relevant questions is necessary in national health surveys to build the population science in the field.


Prosodic Boundary Effects on Syntactic Disambiguation in Children With Cochlear Implants

Purpose
This study investigated prosodic boundary effects on the comprehension of attachment ambiguities in children with cochlear implants (CIs) and normal hearing (NH) and tested the absolute boundary hypothesis and the relative boundary hypothesis. Processing speed was also investigated.
Method
Fifteen children with NH and 13 children with CIs (ages 8–12 years) who are monolingual speakers of Brazilian Portuguese participated in a computerized comprehension task with sentences containing prepositional phrase attachment ambiguity and manipulations of prosodic boundaries.
Results
Children with NH and children with CIs differed in how they used prosodic forms to disambiguate sentences. Children in both groups provided responses consistent with half of the predictions of the relative boundary hypothesis. The absolute boundary hypothesis did not characterize the syntactic disambiguation of children with CIs. Processing speed was similar in both groups.
Conclusions
Children with CIs do not use prosodic information to disambiguate sentences or to facilitate comprehension of unambiguous sentences similarly to children with NH. The results suggest that cross-linguistic differences may interact with syntactic disambiguation. Prosodic contrasts that affect sentence comprehension need to be addressed directly in intervention with children with CIs.



Nonword Repetition and Language Outcomes in Young Children Born Preterm

Purpose
The aims of this study were to examine phonological short-term memory in children born preterm (PT) and to explore relations between this neuropsychological process and later language skills.
Method
Children born PT (n = 74) and full term (FT; n = 60) participated in a nonword repetition (NWR) task at 36 months old. Standardized measures of language skills were administered at 36 and 54 months old. Group differences in NWR task completion and NWR scores were analyzed. Hierarchical multiple regression analyses examined the extent to which NWR ability predicted later performance on language measures.
Results
More children born PT than children born FT failed to complete the NWR task. Among children who completed the task, the performance of children born PT and FT was not statistically different. NWR scores at 36 months old accounted for significant unique variance in language scores at 54 months old in both groups. Birth group did not moderate the relation between NWR and later language performance.
Conclusions
These findings suggest that phonological short-term memory is an important skill underlying language development in both children born PT and FT. These findings have relevance to clinical practice in assessing children born PT.


A Systematic Review of Semantic Feature Analysis Therapy Studies for Aphasia

Purpose
The purpose of this study was to review treatment studies of semantic feature analysis (SFA) for persons with aphasia. The review documents how SFA is used, appraises the quality of the included studies, and evaluates the efficacy of SFA.
Method
The following electronic databases were systematically searched (last search February 2017): Academic Search Complete, CINAHL Plus, E-journals, Health Policy Reference Centre, MEDLINE, PsycARTICLES, PsycINFO, and SocINDEX. The quality of the included studies was rated. Clinical efficacy was determined by calculating effect sizes (Cohen's d) or percent of nonoverlapping data when d could not be calculated.
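The two effect-size metrics mentioned are simple to compute. A sketch, with the caveat that single-case aphasia research often uses variants of d (e.g., baseline-SD rather than pooled-SD denominators), so this pooled-SD version is only one plausible reading of the review's procedure:

```python
def cohens_d(baseline, treatment):
    """Pooled-SD Cohen's d between baseline- and treatment-phase scores."""
    def svar(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    n1, n2 = len(baseline), len(treatment)
    m1, m2 = sum(baseline) / n1, sum(treatment) / n2
    pooled_sd = (((n1 - 1) * svar(baseline) + (n2 - 1) * svar(treatment))
                 / (n1 + n2 - 2)) ** 0.5
    return (m2 - m1) / pooled_sd

def pnd(baseline, treatment):
    """Percent of nonoverlapping data: share of treatment-phase points
    exceeding the highest baseline point."""
    ceiling = max(baseline)
    return 100.0 * sum(t > ceiling for t in treatment) / len(treatment)
```

PND is the fallback when d cannot be calculated, for example when a phase has too few points or zero variance.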
Results
Twenty-one studies were reviewed reporting on 55 persons with aphasia. SFA was used in 6 different types of studies: confrontation naming of nouns, confrontation naming of verbs, connected speech/discourse, group, multilingual, and studies where SFA was compared with other approaches. The quality of included studies was high (Single Case Experimental Design Scale average [range] = 9.55 [8.0–11]). Naming of trained items improved for 45 participants (81.82%). Effect sizes indicated that there was a small treatment effect.
Conclusions
SFA leads to positive outcomes despite the variability of treatment procedures, dosage, duration, and variations to the traditional SFA protocol. Further research is warranted to examine the efficacy of SFA and generalization effects in larger controlled studies.


Suprasegmental Features Are Not Acquired Early: Perception and Production of Monosyllabic Cantonese Lexical Tones in 4- to 6-Year-Old Preschool Children

Purpose
Previous studies reported that children acquire Cantonese tones before 3 years of age, supporting the assumption in models of phonological development that suprasegmental features are acquired rapidly and early in children. Yet, recent research found a large disparity in the age of Cantonese tone acquisition. This study investigated Cantonese tone development in 4- to 6-year-old children.
Method
Forty-eight 4- to 6-year-old Cantonese-speaking children and 28 mothers of the children labeled 30 pictures representing familiar words in the 6 tones in a picture-naming task and identified pictures representing words in different Cantonese tones in a picture-pointing task. To control for lexical biases in tone assessment, tone productions were low-pass filtered to eliminate lexical information. Five judges categorized the tones in filtered stimuli. Tone production accuracy, tone perception accuracy, and correlation between tone production and perception accuracy were examined.
Results
Children did not start to produce adultlike tones until 5 and 6 years of age. Four-year-olds produced none of the tones with adultlike accuracy. Five- and 6-year-olds attained adultlike productions in 2 (T5 and T6) to 3 (T4, T5, and T6) tones, respectively. Children made better progress in tone perception and achieved higher accuracy in perception than in production. However, children in all age groups perceived none of the tones as accurately as adults, except that T1 was perceived with adultlike accuracy by 6-year-olds. Only a weak association was found between children's tone perception and production accuracy.
Conclusions
Contradicting the long-held assumption that children acquire lexical tone rapidly and early, before the mastery of segmentals, this study found that 4- to 6-year-old children have not mastered the perception or production of the full set of Cantonese tones in familiar monosyllabic words. Development was greater in tone perception than in tone production. The higher perception accuracy, coupled with the weak correlation between perception and production abilities, suggests that accurate tone perception is not sufficient for accurate tone production. The findings have clinical and theoretical implications.


Auditory–Perceptual Assessment of Fluency in Typical and Neurologically Disordered Speech

Purpose
The aim of this study is to investigate how speech fluency in typical and atypical speech is perceptually assessed by speech-language pathologists (SLPs). Our research questions were as follows: (a) How do SLPs rate fluency in speakers with and without neurological communication disorders? (b) Do they differentiate the speaker groups? and (c) What features do they hear impairing speech fluency?
Method
Ten SLPs specialized in neurological communication disorders volunteered as expert judges to rate 90 narrative speech samples on a Visual Analogue Scale (see Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kraemer, & Hillman, 2009, p. 127). The samples, presented in random order, were from 70 neurologically healthy speakers (the control group) and 20 speakers with traumatic brain injury, 10 of whom had neurogenic stuttering (designated Clinical Groups A and B).
Results
The fluency ratings were higher for typical speakers than for speakers with traumatic brain injury; however, agreement among the judges was higher for atypical fluency. Auditory–perceptual assessment of fluency was significantly impaired by the features of stuttering and "something else" but not by speech rate. Stuttering was also perceived in speakers not diagnosed as stutterers. A borderline between typical and atypical fluency was found.
Conclusions
Speech fluency is a multifaceted phenomenon, and on the basis of this study, we suggest a more general approach to fluency and its deviations, one that takes into account not only the motor and linguistic aspects of fluency but also the metalinguistic component of expression. The results of this study indicate a need for further studies on the precise nature of borderline fluency and its different disfluencies.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2HKhjAz
via IFTTT

Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory

Purpose
We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations.
Method
Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford–Kowal–Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM.
Results
Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly as a distinct 3rd factor with minimal secondary loading from sentence recall and short-term memory. Nonverbal IQ loaded as a 4th factor.
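A partial correlation controlling for age amounts to correlating the residuals left after regressing each variable on age. The sketch below is a pure-Python illustration of that idea (simple OLS with a single covariate), not the authors' analysis code:

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def residuals(y, z):
    """Residuals of y after regressing out covariate z (simple OLS)."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = sum((a - mz) * (b - my) for a, b in zip(z, y)) / \
        sum((a - mz) ** 2 for a in z)
    return [b - (my + beta * (a - mz)) for a, b in zip(z, y)]

def partial_corr(x, y, z):
    """Correlation of x and y with z (e.g., age) partialed out."""
    return pearson(residuals(x, z), residuals(y, z))
```

With toy data in which two variables are correlated only because both track age, the raw correlation is high while the age-partialed correlation drops toward zero.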
Conclusions
Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of the BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to replicate the proposed role played by WM in adverse listening situations.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2FFhdru
via IFTTT

Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

Purpose
The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces.
Method
Young adult speakers produced 3 repetitions of 2 different sentences at 3 different loudness levels. Lingual kinematic and acoustic signals were collected and analyzed. Acoustic and kinematic variants of several vowel space metrics were calculated from the formant frequencies and the position of 2 lingual markers. Traditional metrics included triangular vowel space area and the vowel articulation index. Acoustic and kinematic variants of sentence-level metrics based on the articulatory–acoustic vowel space and the vowel space hull area were also calculated.
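Triangular vowel space area, one of the traditional metrics named above, is simply the area of the triangle spanned by the corner vowels in the F1–F2 (or marker-position) plane. A minimal sketch using the shoelace formula; the coordinates in the usage example are hypothetical:

```python
def triangle_area(p1, p2, p3):
    """Shoelace area of the triangle spanned by three (F1, F2) points,
    e.g. the corner vowels /i/, /a/, /u/ for triangular vowel space area."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0
```

The same formula applies whether the coordinates are formant frequencies in hertz or lingual marker positions in millimeters, which is what makes acoustic and kinematic variants of the metric directly comparable.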
Results
Both acoustic and kinematic variants of the sentence-level metrics significantly increased with an increase in loudness, whereas no statistically significant differences in traditional vowel-point metrics were observed for either the kinematic or acoustic variants across the 3 loudness conditions. In addition, moderate-to-strong relationships between the acoustic and kinematic variants of the sentence-level vowel space metrics were observed for the majority of participants.
Conclusions
These data suggest that both kinematic and acoustic vowel space metrics that reflect the dynamic contributions of both consonant and vowel segments are sensitive to within-speaker changes in articulation associated with manipulations of speech intensity.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EXtO98
via IFTTT

The Prevalence of Speech and Language Disorders in French-Speaking Preschool Children From Yaoundé (Cameroon)

Purpose
The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon.
Method
A total of 460 participants aged 3–5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified on the basis of clinical judgment by a speech-language pathologist.
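The 2-SD cutoff can be sketched as follows. This is a simplified illustration that assumes renormed test scores are already in hand; it is not the study's scoring procedure:

```python
def flag_disorder(scores, cutoff_sd=2.0):
    """Flag scores falling more than `cutoff_sd` SDs below the sample
    mean (the renorming-plus-cutoff rule described above, in sketch form)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / (n - 1)) ** 0.5
    threshold = mean - cutoff_sd * sd
    return [s < threshold for s in scores]

def prevalence_pct(flags):
    """Prevalence as the percentage of flagged participants."""
    return 100.0 * sum(flags) / len(flags)
```

For example, in a sample of 100 children where 2 score far below the rest, the rule flags those 2 and the prevalence estimate is 2%.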
Results
Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. By disorder type, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%.
Conclusion
Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EXRjyE
via IFTTT

Kinematic Features of Jaw and Lips Distinguish Symptomatic From Presymptomatic Stages of Bulbar Decline in Amyotrophic Lateral Sclerosis

Purpose
The goals of this study were to (a) classify speech movements of patients with amyotrophic lateral sclerosis (ALS) in presymptomatic and symptomatic phases of bulbar function decline relying solely on kinematic features of lips and jaw and (b) identify the most important measures that detect the transition between early and late bulbar changes.
Method
One hundred ninety-two recordings obtained from 64 patients with ALS were considered for the analysis. Feature selection and classification algorithms were used to analyze lip and jaw movements recorded with Optotrak Certus (Northern Digital Inc.) during a sentence task. A set of 35 features, including movement range, velocity, acceleration, jerk, and area measures of the lips and jaw, was used to classify sessions by speaking rate into presymptomatic (> 160 words per minute) and symptomatic (< 160 words per minute) groups.
Results
Presymptomatic and symptomatic phases of bulbar decline were distinguished with high accuracy (87%), relying only on lip and jaw movements. The features that best detected the transition between early and later bulbar stages included the cumulative path of the lower lip and jaw and the peak velocity, acceleration, and jerk of the lower lip and jaw.
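Two quantities named in this abstract, the 160-words-per-minute grouping and the cumulative path of a marker, are straightforward to compute. The sketch below is an illustrative reconstruction, not the authors' pipeline (which used 35 features and trained classifiers):

```python
def bulbar_stage(word_count, duration_s, threshold_wpm=160.0):
    """Label a recording session by speaking rate, per the 160-wpm
    criterion used to define the presymptomatic/symptomatic groups."""
    wpm = 60.0 * word_count / duration_s
    return "presymptomatic" if wpm > threshold_wpm else "symptomatic"

def cumulative_path(points):
    """Cumulative 2-D path length of a lip or jaw marker trajectory,
    one of the kinematic features reported as most informative."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )
```

In practice, marker trajectories are 3-D and sampled at the motion-capture frame rate; the 2-D version above is kept minimal for clarity.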
Conclusion
The results established a relationship between facial kinematics and bulbar function decline in ALS. Considering that facial movements can be recorded by means of novel inexpensive and easy-to-use, video-based methods, this work supports the development of an automatic system for facial movement analysis to help clinicians in tracking the disease progression in ALS.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2rt9o3N
via IFTTT

Erratum



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2GXZJfi
via IFTTT

Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

Purpose
The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression.
Study Design
Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos.
Method
A diffusive behavior detection–based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions.
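Jitter and shimmer, two of the four measures evaluated, have standard local definitions: the mean absolute cycle-to-cycle change relative to the mean, computed over glottal periods and over cycle peak amplitudes, respectively. A minimal sketch (the study's exact formulations may differ):

```python
def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive
    cycle periods, relative to the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amps):
    """Local shimmer: the same ratio computed on cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))
```

A perfectly periodic signal yields zero jitter and shimmer; increasing cycle-to-cycle variability, as in the noisier synthetic signals, drives both measures up.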
Results
Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study.
Conclusion
The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2js0PC4
via IFTTT

Weighting of Amplitude and Formant Rise Time Cues by School-Aged Children: A Mismatch Negativity Study

Purpose
An important skill in the development of speech perception is to apply optimal weights to acoustic cues so that phonemic information is recovered from speech with minimum effort. Here, we investigated the development of acoustic cue weighting of amplitude rise time (ART) and formant rise time (FRT) cues in children as measured by mismatch negativity (MMN).
Method
Twelve adults and 36 children aged 6–12 years listened to a /ba/–/wa/ contrast in an oddball paradigm in which the standard stimulus had the ART and FRT cues of /ba/. In different blocks, the deviant stimulus had either the ART or FRT cues of /wa/.
Results
The results revealed that children younger than 10 years were sensitive to both ART and FRT cues whereas 10- to 12-year-old children and adults were sensitive only to FRT cues. Moreover, children younger than 10 years generated a positive mismatch response, whereas older children and adults generated MMN.
Conclusion
These results suggest that preattentive adultlike weighting of ART and FRT cues is attained only by 10 years of age and accompanies the change from mismatch response to the more mature MMN response.
Supplemental Material
https://doi.org/10.23641/asha.6207608

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2I6GI7b
via IFTTT

Effects of a Tablet-Based Home Practice Program With Telepractice on Treatment Outcomes in Chronic Aphasia

Purpose
The aim of this study was to determine if a tablet-based home practice program with weekly telepractice support could enable long-term maintenance of recent treatment gains and foster new language gains in poststroke aphasia.
Method
In a pre–post group study of home practice outcomes, 21 individuals with chronic aphasia were examined before and after a 6-month home practice phase and again at follow-up 4 months later. The main outcome measure studied was change in naming previously treated or untreated, practiced or unpracticed pictures of objects and actions. Individualized home practice programs were created in iBooks Author with semantic, phonemic, and orthographic cueing in pictures, words, and videos in order to facilitate naming of previously treated or untreated pictures.
Results
Home practice was effective for all participants, with severity moderating treatment effects such that individuals with the most severe aphasia made and maintained fewer gains. There was a negative relationship between the amount of training required for iPad proficiency and improvement on practiced and unpracticed pictures, and a positive relationship between practice compliance and those same improvements.
Conclusion
Unsupervised home practice with weekly video teleconferencing support is effective. This study demonstrates that even individuals with chronic severe aphasia, including those with no prior smart device or even computer experience, can attain independent proficiency to continue practicing and improving their language skills beyond therapy discharge. This could represent a low-cost therapy option for individuals without insurance coverage and/or those for whom mobility is an obstacle to obtaining traditional aphasia therapy.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2vFLNBk
via IFTTT

Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development

Purpose
The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD).
Method
Fifteen children with SLI (M age = 6;5 [years;months]) and 15 with TD (M age = 6;4) completed a forward gating task that presented consonant–vowel–consonant dense and sparse (neighborhood density) nouns and verbs (syntactic class).
Results
On all dependent variables, the SLI group performed like the TD group. Recognition performance was highest for dense words and nouns. The majority of 1st nontarget responses shared the 1st phoneme with the target (i.e., were in the target's cohort). When considering the ranking of word types from easiest to most difficult, children showed equivalent recognition performance for dense verbs and sparse nouns, which were both easier to recognize than sparse verbs but more difficult than dense nouns.
Conclusion
The current study yields new insight into how children access lexical–phonological information and syntactic class during the process of spoken word recognition. Given the identical pattern of results for the SLI and TD groups, we hypothesize that accessing lexical–phonological information may be a strength for children with SLI. We also discuss implications for using the forward gating paradigm as a measure of word recognition.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2KAlShR
via IFTTT

Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs

Purpose
A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support.
Method
Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified.
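At a 60-Hz sampling rate, latency to first view a region follows directly from the index of the first gaze sample landing inside an area of interest. The sketch below uses a hypothetical rectangular AOI and is illustrative only; real analyses would also apply fixation-detection criteria:

```python
def first_fixation_latency_ms(gaze, aoi, fs=60.0):
    """Latency (ms) to the first gaze sample falling inside a rectangular
    area of interest, given point-of-gaze samples at `fs` Hz.
    `aoi` is (x_min, y_min, x_max, y_max); returns None if never entered."""
    x0, y0, x1, y1 = aoi
    for i, (x, y) in enumerate(gaze):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return 1000.0 * i / fs  # sample index -> milliseconds
    return None
```

At 60 Hz each sample spans about 16.7 ms, which bounds the temporal resolution of any latency measure derived this way.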
Results
The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome.
Conclusion
The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed.
Supplemental Material
https://doi.org/10.23641/asha.6066545

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2vt9GMw
via IFTTT

Does Implicit Voice Learning Improve Spoken Language Processing? Implications for Clinical Practice

Purpose
In typical interactions with other speakers, including a clinical environment, listeners become familiar with voices through implicit learning. Previous studies have found evidence for a Familiar Talker Advantage (better speech perception and spoken language processing for familiar voices) following explicit voice learning. The current study examined whether a Familiar Talker Advantage would result from implicit voice learning.
Method
Thirty-three adults and 16 second graders were familiarized with 1 of 2 talkers' voices over 2 days through live interactions as 1 of 2 experimenters administered standardized tests and interacted with the listeners. To assess whether this implicit voice learning would generate a Familiar Talker Advantage, listeners completed a baseline sentence recognition task and a post-learning sentence recognition task with both the familiar talker and the unfamiliar talker.
Results
No significant effect of voice familiarity was found for either the children or the adults following implicit voice learning. Effect size estimates suggest that familiarity with the voice may benefit some listeners, despite the lack of an overall effect of familiarity.
Discussion
We discuss possible clinical implications of this finding and directions for future research.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2jpONJ6
via IFTTT

Morphosyntactic Production and Verbal Working Memory: Evidence From Greek Aphasia and Healthy Aging

Purpose
The present work investigated whether verbal working memory (WM) affects morphosyntactic production in configurations that do not involve or favor similarity-based interference and whether WM interacts with verb-related morphosyntactic categories and/or cue–target distance (locality). It also explored whether the findings related to the questions above lend support to a recent account of agrammatic morphosyntactic production: Interpretable Features' Impairment Hypothesis (Fyndanis, Varlokosta, & Tsapkini, 2012).
Method
A sentence completion task testing production of subject–verb agreement, tense/time reference, and aspect in local and nonlocal conditions and two verbal WM tasks were administered to 8 Greek-speaking persons with agrammatic aphasia (PWA) and 103 healthy participants.
Results
The 3 morphosyntactic categories dissociated in both groups (agreement > tense > aspect). A significant interaction emerged in both groups between the 3 morphosyntactic categories and WM. There was no main effect of locality in either of the 2 groups. At the individual level, all 8 PWA exhibited dissociations between agreement, tense, and aspect, and effects of locality were contradictory.
Conclusions
Results suggest that individuals with WM limitations (both PWA and healthy older speakers) show dissociations between the production of verb-related morphosyntactic categories. WM affects performance, shaping the pattern of morphosyntactic production (in Greek: subject–verb agreement > tense > aspect). The absence of an effect of locality suggests that executive capacities tapped by WM tasks are involved in morphosyntactic processing of demanding categories even when the cue is adjacent to the target. Results are consistent with the Interpretable Features' Impairment Hypothesis (Fyndanis et al., 2012).
Supplemental Material
https://doi.org/10.23641/asha.6024428

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EQoPqI
via IFTTT

Population Health in Pediatric Speech and Language Disorders: Available Data Sources and a Research Agenda for the Field

Purpose
The aim of the study was to provide an overview of population science as applied to speech and language disorders, illustrate data sources, and advance a research agenda on the epidemiology of these conditions.
Method
Computer-aided database searches were performed to identify key national surveys and other sources of data necessary to establish the incidence, prevalence, and course and outcome of speech and language disorders. This article also summarizes a research agenda that could enhance our understanding of the epidemiology of these disorders.
Results
Although the data yielded estimates of prevalence and incidence for speech and language disorders, existing sources of data are inadequate to establish reliable rates of incidence, prevalence, and outcomes for speech and language disorders at the population level.
Conclusions
Greater support for inclusion of speech and language disorder–relevant questions is necessary in national health surveys to build the population science in the field.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2r6p3Vx
via IFTTT

Prosodic Boundary Effects on Syntactic Disambiguation in Children With Cochlear Implants

Purpose
This study investigated prosodic boundary effects on the comprehension of attachment ambiguities in children with cochlear implants (CIs) and normal hearing (NH) and tested the absolute boundary hypothesis and the relative boundary hypothesis. Processing speed was also investigated.
Method
Fifteen children with NH and 13 children with CIs (ages 8–12 years) who are monolingual speakers of Brazilian Portuguese participated in a computerized comprehension task with sentences containing prepositional phrase attachment ambiguity and manipulations of prosodic boundaries.
Results
Children with NH and children with CIs differed in how they used prosodic forms to disambiguate sentences. Children in both groups provided responses consistent with half of the predictions of the relative boundary hypothesis. The absolute boundary hypothesis did not characterize the syntactic disambiguation of children with CIs. Processing speed was similar in both groups.
Conclusions
Children with CIs do not use prosodic information to disambiguate sentences or to facilitate comprehension of unambiguous sentences similarly to children with NH. The results suggest that cross-linguistic differences may interact with syntactic disambiguation. Prosodic contrasts that affect sentence comprehension need to be addressed directly in intervention with children with CIs.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2FFguGM
via IFTTT

Erratum



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2HGM7EP
via IFTTT

Nonword Repetition and Language Outcomes in Young Children Born Preterm

Purpose
The aims of this study were to examine phonological short-term memory in children born preterm (PT) and to explore relations between this neuropsychological process and later language skills.
Method
Children born PT (n = 74) and full term (FT; n = 60) participated in a nonword repetition (NWR) task at 36 months old. Standardized measures of language skills were administered at 36 and 54 months old. Group differences in NWR task completion and NWR scores were analyzed. Hierarchical multiple regression analyses examined the extent to which NWR ability predicted later performance on language measures.
Results
More children born PT than FT did not complete the NWR task. Among children who completed the task, the performance of children born PT and FT was not statistically different. NWR scores at 36 months old accounted for significant unique variance in language scores at 54 months old in both groups. Birth group did not moderate the relation between NWR and later language performance.
Conclusions
These findings suggest that phonological short-term memory is an important skill underlying language development in both children born PT and FT. These findings have relevance to clinical practice in assessing children born PT.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2FFh6w4
via IFTTT

A Systematic Review of Semantic Feature Analysis Therapy Studies for Aphasia

Purpose
The purpose of this study was to review treatment studies of semantic feature analysis (SFA) for persons with aphasia. The review documents how SFA is used, appraises the quality of the included studies, and evaluates the efficacy of SFA.
Method
The following electronic databases were systematically searched (last search February 2017): Academic Search Complete, CINAHL Plus, E-journals, Health Policy Reference Centre, MEDLINE, PsycARTICLES, PsycINFO, and SocINDEX. The quality of the included studies was rated. Clinical efficacy was determined by calculating effect sizes (Cohen's d) or percent of nonoverlapping data when d could not be calculated.
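Percent of nonoverlapping data, the fallback statistic used when d could not be calculated, is the share of treatment-phase data points exceeding the highest baseline point. A minimal sketch with made-up data:

```python
def pnd(baseline, treatment):
    """Percent of nonoverlapping data: percentage of treatment-phase
    points that exceed the highest baseline point."""
    best_baseline = max(baseline)
    above = sum(t > best_baseline for t in treatment)
    return 100.0 * above / len(treatment)
```

For example, with baseline naming scores of 2, 3, and 4 and treatment-phase scores of 5, 6, 3, and 7, three of four treatment points exceed the best baseline point, giving a PND of 75%.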
Results
Twenty-one studies were reviewed, reporting on 55 persons with aphasia. SFA was used in 6 different types of studies: confrontation naming of nouns, confrontation naming of verbs, connected speech/discourse, group, multilingual, and studies where SFA was compared with other approaches. The quality of the included studies was high (Single Case Experimental Design Scale average [range] = 9.55 [8.0–11]). Naming of trained items improved for 45 participants (81.82%). Effect sizes indicated that there was a small treatment effect.
Conclusions
SFA leads to positive outcomes despite the variability of treatment procedures, dosage, duration, and variations to the traditional SFA protocol. Further research is warranted to examine the efficacy of SFA and generalization effects in larger controlled studies.

from #Audiology via ola Kala on Inoreader https://ift.tt/2HLjcNw
via IFTTT

Auditory–Perceptual Assessment of Fluency in Typical and Neurologically Disordered Speech

Purpose
The aim of this study is to investigate how speech fluency in typical and atypical speech is perceptually assessed by speech-language pathologists (SLPs). Our research questions were as follows: (a) How do SLPs rate fluency in speakers with and without neurological communication disorders? (b) Do they differentiate the speaker groups? and (c) What features do they hear impairing speech fluency?
Method
Ten SLPs specialized in neurological communication disorders volunteered as expert judges to rate 90 narrative speech samples on a Visual Analogue Scale (see Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kraemer, & Hillman, 2009, p. 127). The samples, randomly mixed, were from 70 neurologically healthy speakers (the control group) and 20 speakers with traumatic brain injury, 10 of whom had neurogenic stuttering (designated as Clinical Groups A and B).
Results
The fluency rates were higher for typical speakers than for speakers with traumatic brain injury; however, agreement among the judges was higher for atypical fluency. Perceived fluency was significantly impaired by features of stuttering and by other factors, but not by speech rate. Stuttering was also perceived in speakers not diagnosed as stutterers. A borderline between typical and atypical fluency was found.
Conclusions
Speech fluency is a multifaceted phenomenon, and on the basis of this study, we suggest a more general approach to fluency and its deviations that will take into account, in addition to the motor and linguistic aspects of fluency, the metalinguistic component of expression as well. The results of this study indicate a need for further studies on the precise nature of borderline fluency and its different disfluencies.

from #Audiology via ola Kala on Inoreader https://ift.tt/2HKhjAz
via IFTTT

Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory

Purpose
We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations.
Method
Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford–Kowal–Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM.
Results
Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly as a distinct 3rd factor with minimal secondary loading from sentence recall and short-term memory. Nonverbal IQ loaded as a 4th factor.
Conclusions
Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to replicate the proposed role played by WM in adverse listening situations.
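The age-controlled associations reported above are standard first-order partial correlations, r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2)). As a reading aid, here is a minimal pure-Python sketch of that computation; the data are invented toy values, not the study's measures:

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z
    (in the study above, z would be age)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data: x and y are perfectly correlated, so partialling out z
# still leaves a partial correlation of approximately 1.
print(partial_corr([1, 2, 3, 4], [2, 4, 6, 8], [1, -1, 1, -1]))  # ≈ 1.0
```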

from #Audiology via ola Kala on Inoreader https://ift.tt/2FFhdru
via IFTTT

Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

Purpose
The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces.
Method
Young adult speakers produced 3 repetitions of 2 different sentences at 3 different loudness levels. Lingual kinematic and acoustic signals were collected and analyzed. Acoustic and kinematic variants of several vowel space metrics were calculated from the formant frequencies and the position of 2 lingual markers. Traditional metrics included triangular vowel space area and the vowel articulation index. Acoustic and kinematic variants of sentence-level metrics based on the articulatory–acoustic vowel space and the vowel space hull area were also calculated.
Results
Both acoustic and kinematic variants of the sentence-level metrics significantly increased with an increase in loudness, whereas no statistically significant differences in traditional vowel-point metrics were observed for either the kinematic or acoustic variants across the 3 loudness conditions. In addition, moderate-to-strong relationships between the acoustic and kinematic variants of the sentence-level vowel space metrics were observed for the majority of participants.
Conclusions
These data suggest that both kinematic and acoustic vowel space metrics that reflect the dynamic contributions of both consonant and vowel segments are sensitive to within-speaker changes in articulation associated with manipulations of speech intensity.

from #Audiology via ola Kala on Inoreader https://ift.tt/2EXtO98
via IFTTT

The Prevalence of Speech and Language Disorders in French-Speaking Preschool Children From Yaoundé (Cameroon)

Purpose
The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon.
Method
A total of 460 participants aged 3–5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist.
Results
Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%.
Conclusion
Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.

from #Audiology via ola Kala on Inoreader https://ift.tt/2EXRjyE
via IFTTT

Kinematic Features of Jaw and Lips Distinguish Symptomatic From Presymptomatic Stages of Bulbar Decline in Amyotrophic Lateral Sclerosis

Purpose
The goals of this study were to (a) classify speech movements of patients with amyotrophic lateral sclerosis (ALS) in presymptomatic and symptomatic phases of bulbar function decline relying solely on kinematic features of lips and jaw and (b) identify the most important measures that detect the transition between early and late bulbar changes.
Method
One hundred ninety-two recordings obtained from 64 patients with ALS were considered for the analysis. Feature selection and classification algorithms were used to analyze lip and jaw movements recorded with Optotrak Certus (Northern Digital Inc.) during a sentence task. A feature set, which included 35 measures of movement range, velocity, acceleration, jerk, and area measures of lips and jaw, was used to classify sessions according to the speaking rate into presymptomatic (> 160 words per minute) and symptomatic (< 160 words per minute) groups.
Results
Presymptomatic and symptomatic phases of bulbar decline were distinguished with high accuracy (87%), relying only on lip and jaw movements. The features that best detected the transition between early and later bulbar stages included the cumulative path of the lower lip and jaw and the peak velocity, acceleration, and jerk of the lower lip and jaw.
Conclusion
The results established a relationship between facial kinematics and bulbar function decline in ALS. Considering that facial movements can be recorded by means of novel inexpensive and easy-to-use, video-based methods, this work supports the development of an automatic system for facial movement analysis to help clinicians in tracking the disease progression in ALS.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rt9o3N
via IFTTT

Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

Purpose
The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression.
Study Design
Voice signals were constructed with differing degrees of noise to model signal chaos. At each noise power, 100 Monte Carlo experiments were performed to analyze the outputs of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of these 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos.
Method
A diffusive behavior detection–based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions.
Results
Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study.
Conclusion
The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
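Jitter and shimmer, two of the four measures evaluated above, are standard cycle-to-cycle perturbation statistics, and their sensitivity to signal noise can be illustrated with a small Monte Carlo run in the spirit of the study design. Everything below (the 5-ms period train, the Gaussian noise model, the run counts) is an illustrative assumption, not the authors' implementation:

```python
import random

def jitter(periods):
    """Relative jitter: mean absolute difference of consecutive
    pitch periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amps):
    """Relative shimmer: the same statistic applied to cycle amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

def monte_carlo(noise_sd, runs=100, cycles=50, seed=1):
    """Mean jitter over repeated noisy realizations of a 5-ms period train."""
    rng = random.Random(seed)
    vals = []
    for _ in range(runs):
        periods = [5.0 + rng.gauss(0.0, noise_sd) for _ in range(cycles)]
        vals.append(jitter(periods))
    return sum(vals) / runs

# As in the abstract, perturbation measures degrade (grow) with noise power.
print(monte_carlo(0.01) < monte_carlo(0.1))  # prints True
```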

from #Audiology via ola Kala on Inoreader https://ift.tt/2js0PC4
via IFTTT

Weighting of Amplitude and Formant Rise Time Cues by School-Aged Children: A Mismatch Negativity Study

Purpose
An important skill in the development of speech perception is to apply optimal weights to acoustic cues so that phonemic information is recovered from speech with minimum effort. Here, we investigated the development of acoustic cue weighting of amplitude rise time (ART) and formant rise time (FRT) cues in children as measured by mismatch negativity (MMN).
Method
Twelve adults and 36 children aged 6–12 years listened to a /ba/–/wa/ contrast in an oddball paradigm in which the standard stimulus had the ART and FRT cues of /ba/. In different blocks, the deviant stimulus had either the ART or FRT cues of /wa/.
Results
The results revealed that children younger than 10 years were sensitive to both ART and FRT cues whereas 10- to 12-year-old children and adults were sensitive only to FRT cues. Moreover, children younger than 10 years generated a positive mismatch response, whereas older children and adults generated MMN.
Conclusion
These results suggest that preattentive adultlike weighting of ART and FRT cues is attained only by 10 years of age and accompanies the change from mismatch response to the more mature MMN response.
Supplemental Material
https://doi.org/10.23641/asha.6207608

from #Audiology via ola Kala on Inoreader https://ift.tt/2I6GI7b
via IFTTT

Effects of a Tablet-Based Home Practice Program With Telepractice on Treatment Outcomes in Chronic Aphasia

Purpose
The aim of this study was to determine if a tablet-based home practice program with weekly telepractice support could enable long-term maintenance of recent treatment gains and foster new language gains in poststroke aphasia.
Method
In a pre–post group study of home practice outcomes, 21 individuals with chronic aphasia were examined before and after a 6-month home practice phase and again at follow-up 4 months later. The main outcome measure studied was change in naming previously treated or untreated, practiced or unpracticed pictures of objects and actions. Individualized home practice programs were created in iBooks Author with semantic, phonemic, and orthographic cueing in pictures, words, and videos in order to facilitate naming of previously treated or untreated pictures.
Results
Home practice was effective for all participants, with severity moderating treatment effects, such that individuals with the most severe aphasia made and maintained fewer gains. There was a negative relationship between the amount of training required for iPad proficiency and improvements on practiced and unpracticed pictures, and a positive relationship between practice compliance and those same improvements.
Conclusion
Unsupervised home practice with weekly video teleconferencing support is effective. This study demonstrates that even individuals with chronic severe aphasia, including those with no prior smart device or even computer experience, can attain independent proficiency to continue practicing and improving their language skills beyond therapy discharge. This could represent a low-cost therapy option for individuals without insurance coverage and/or those for whom mobility is an obstacle to obtaining traditional aphasia therapy.

from #Audiology via ola Kala on Inoreader https://ift.tt/2vFLNBk
via IFTTT

Neighborhood Density and Syntactic Class Effects on Spoken Word Recognition: Specific Language Impairment and Typical Development

Purpose
The purpose of the current study was to determine the effect of neighborhood density and syntactic class on word recognition in children with specific language impairment (SLI) and typical development (TD).
Method
Fifteen children with SLI (M age = 6;5 [years;months]) and 15 with TD (M age = 6;4) completed a forward gating task that presented consonant–vowel–consonant dense and sparse (neighborhood density) nouns and verbs (syntactic class).
Results
On all dependent variables, the SLI group performed like the TD group. Recognition performance was highest for dense words and nouns. The majority of 1st nontarget responses shared the 1st phoneme with the target (i.e., were in the target's cohort). When considering the ranking of word types from easiest to most difficult, children showed equivalent recognition performance for dense verbs and sparse nouns, which were both easier to recognize than sparse verbs but more difficult than dense nouns.
Conclusion
The current study yields new insight into how children access lexical–phonological information and syntactic class during the process of spoken word recognition. Given the identical pattern of results for the SLI and TD groups, we hypothesize that accessing lexical–phonological information may be a strength for children with SLI. We also discuss implications for using the forward gating paradigm as a measure of word recognition.

from #Audiology via ola Kala on Inoreader https://ift.tt/2KAlShR
via IFTTT

Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs

Purpose
A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support.
Method
Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified.
Results
The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome.
Conclusion
The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed.
Supplemental Material
https://doi.org/10.23641/asha.6066545

from #Audiology via ola Kala on Inoreader https://ift.tt/2vt9GMw
via IFTTT

Does Implicit Voice Learning Improve Spoken Language Processing? Implications for Clinical Practice

Purpose
In typical interactions with other speakers, including a clinical environment, listeners become familiar with voices through implicit learning. Previous studies have found evidence for a Familiar Talker Advantage (better speech perception and spoken language processing for familiar voices) following explicit voice learning. The current study examined whether a Familiar Talker Advantage would result from implicit voice learning.
Method
Thirty-three adults and 16 second graders were familiarized with 1 of 2 talkers' voices over 2 days through live interactions as 1 of 2 experimenters administered standardized tests and interacted with the listeners. To assess whether this implicit voice learning would generate a Familiar Talker Advantage, listeners completed a baseline sentence recognition task and a post-learning sentence recognition task with both the familiar talker and the unfamiliar talker.
Results
No significant effect of voice familiarity was found for either the children or the adults following implicit voice learning. Effect size estimates suggest that familiarity with the voice may benefit some listeners, despite the lack of an overall effect of familiarity.
Discussion
We discuss possible clinical implications of this finding and directions for future research.

from #Audiology via ola Kala on Inoreader https://ift.tt/2jpONJ6
via IFTTT

Morphosyntactic Production and Verbal Working Memory: Evidence From Greek Aphasia and Healthy Aging

Purpose
The present work investigated whether verbal working memory (WM) affects morphosyntactic production in configurations that do not involve or favor similarity-based interference and whether WM interacts with verb-related morphosyntactic categories and/or cue–target distance (locality). It also explored whether the findings related to the questions above lend support to a recent account of agrammatic morphosyntactic production: Interpretable Features' Impairment Hypothesis (Fyndanis, Varlokosta, & Tsapkini, 2012).
Method
A sentence completion task testing the production of subject–verb agreement, tense/time reference, and aspect in local and nonlocal conditions, along with 2 verbal WM tasks, was administered to 8 Greek-speaking persons with agrammatic aphasia (PWA) and 103 healthy participants.
Results
The 3 morphosyntactic categories dissociated in both groups (agreement > tense > aspect). A significant interaction emerged in both groups between the 3 morphosyntactic categories and WM. There was no main effect of locality in either of the 2 groups. At the individual level, all 8 PWA exhibited dissociations between agreement, tense, and aspect, and effects of locality were contradictory.
Conclusions
Results suggest that individuals with WM limitations (both PWA and healthy older speakers) show dissociations between the production of verb-related morphosyntactic categories. WM affects performance, shaping the pattern of morphosyntactic production (in Greek: subject–verb agreement > tense > aspect). The absence of an effect of locality suggests that executive capacities tapped by WM tasks are involved in morphosyntactic processing of demanding categories even when the cue is adjacent to the target. Results are consistent with the Interpretable Features' Impairment Hypothesis (Fyndanis et al., 2012).
Supplemental Material
https://doi.org/10.23641/asha.6024428

from #Audiology via ola Kala on Inoreader https://ift.tt/2EQoPqI
via IFTTT

Population Health in Pediatric Speech and Language Disorders: Available Data Sources and a Research Agenda for the Field

Purpose
The aim of the study was to provide an overview of population science as applied to speech and language disorders, illustrate data sources, and advance a research agenda on the epidemiology of these conditions.
Method
Computer-aided database searches were performed to identify key national surveys and other sources of data necessary to establish the incidence, prevalence, and course and outcome of speech and language disorders. This article also summarizes a research agenda that could enhance our understanding of the epidemiology of these disorders.
Results
Although the data yielded estimates of prevalence and incidence for speech and language disorders, existing sources of data are inadequate to establish reliable rates of incidence, prevalence, and outcomes for speech and language disorders at the population level.
Conclusions
Greater support for inclusion of speech and language disorder–relevant questions is necessary in national health surveys to build the population science in the field.

from #Audiology via ola Kala on Inoreader https://ift.tt/2r6p3Vx
via IFTTT

Prosodic Boundary Effects on Syntactic Disambiguation in Children With Cochlear Implants

Purpose
This study investigated prosodic boundary effects on the comprehension of attachment ambiguities in children with cochlear implants (CIs) and normal hearing (NH) and tested the absolute boundary hypothesis and the relative boundary hypothesis. Processing speed was also investigated.
Method
Fifteen children with NH and 13 children with CIs (ages 8–12 years) who are monolingual speakers of Brazilian Portuguese participated in a computerized comprehension task with sentences containing prepositional phrase attachment ambiguity and manipulations of prosodic boundaries.
Results
Children with NH and children with CIs differed in how they used prosodic forms to disambiguate sentences. Children in both groups provided responses consistent with half of the predictions of the relative boundary hypothesis. The absolute boundary hypothesis did not characterize the syntactic disambiguation of children with CIs. Processing speed was similar in both groups.
Conclusions
Children with CIs do not use prosodic information to disambiguate sentences or to facilitate comprehension of unambiguous sentences similarly to children with NH. The results suggest that cross-linguistic differences may interact with syntactic disambiguation. Prosodic contrasts that affect sentence comprehension need to be addressed directly in intervention with children with CIs.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FFguGM
via IFTTT

Nonword Repetition and Language Outcomes in Young Children Born Preterm

Purpose
The aims of this study were to examine phonological short-term memory in children born preterm (PT) and to explore relations between this neuropsychological process and later language skills.
Method
Children born PT (n = 74) and full term (FT; n = 60) participated in a nonword repetition (NWR) task at 36 months old. Standardized measures of language skills were administered at 36 and 54 months old. Group differences in NWR task completion and NWR scores were analyzed. Hierarchical multiple regression analyses examined the extent to which NWR ability predicted later performance on language measures.
Results
More children born PT than FT did not complete the NWR task. Among children who completed the task, the performance of children born PT and FT was not statistically different. NWR scores at 36 months old accounted for significant unique variance in language scores at 54 months old in both groups. Birth group did not moderate the relation between NWR and later language performance.
Conclusions
These findings suggest that phonological short-term memory is an important skill underlying language development in both children born PT and FT. These findings have relevance to clinical practice in assessing children born PT.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FFh6w4
via IFTTT

Intensity Discrimination and Speech Recognition of Cochlear Implant Users

Abstract

The relation between speech recognition and within-channel or across-channel (i.e., spectral tilt) intensity discrimination was measured in nine CI users (11 ears). Within-channel intensity difference limens (IDLs) were measured at four electrode locations across the electrode array. Spectral tilt difference limens were measured with (XIDL-J) and without (XIDL) level jitter. Only three subjects could perform the XIDL-J task with the amount of jitter required to limit use of within-channel cues. XIDLs (normalized to %DR) were correlated with speech recognition (r = 0.67, P = 0.019) and were highly correlated with IDLs. XIDLs were on average nearly 3 times larger than IDLs and did not vary consistently with the spatial separation of the two component electrodes. The overall pattern of results was consistent with a common underlying subject-dependent limitation in the two difference limen tasks, hypothesized to be perceptual variance (how the perception of a sound differs on different presentations), which may also underlie the correlation of XIDLs with speech recognition. Evidence that spectral tilt discrimination is more important for speech recognition than within-channel intensity discrimination was not unequivocally shown in this study. However, the results tended to support this proposition, with XIDLs more correlated with speech performance than IDLs, and the ratio XIDL/IDL also being correlated with speech recognition. If supported by further research, the importance of perceptual variance as a limiting factor in speech understanding for CI users has important implications for efforts to improve outcomes for those with poor speech recognition.



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2wVnHTO
via IFTTT

Laurel or Yanni?

Several years ago, the Internet went crazy over the color of a dress due to a visual illusion. This week, an auditory illusion had many people asking the simple question of whether they heard “Laurel” or “Yanni.”



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2wO6njA
via IFTTT

Community noise exposure and annoyance, activity interference, and academic achievement among university students

Rattapon Onchang, Darryl W Hawker

Noise and Health 2018 20(94):69-76

Background: Noise annoyance and effects on academic performance have been investigated for primary and secondary school students, but comparatively little work has been conducted with university students, who generally spend more time in dormitories or other accommodation for self-study.
Objective: To determine, using a socio-acoustic approach involving face-to-face interviews and actual noise measurements, the effect of various community noise sources on student activities in accommodation both inside and outside a university precinct, as well as relationships with cumulative grade point average (GPA).
Materials and Methods: The study sample comprised a student group resident off-campus (n = 450) and a control group resident in dormitories on-campus (n = 336). Noise levels [LA (dB)] were measured at both locations according to International Organization for Standardization standards. The extent of community noise interference with student activities was examined with bivariate and stratified analyses, and results were presented as Mantel–Haenszel weighted odds ratios (ORMH) with 95% confidence intervals. Binary logistic regression was employed to assess the association between noise-disturbed student activities and dichotomized GPA values and to derive odds ratios (ORs) for these associations.
Results: Measured noise levels were all significantly (P < 0.05) higher for off-campus students. This was not reflected in the interviewed students’ subjective perceptions of how “noisy” their respective environments were. The off-campus student cohort was, however, more annoyed by all community noise categories (P < 0.001) except road traffic noise. For impact on specific student activities, the largest differences between on- and off-campus students were found for telephone and personal communication, regardless of the type of community noise. There was no significant difference between the off-campus and on-campus groups in the relationships between perceived annoyance due to community noise categories and cumulative GPA, with ORMH values ranging from 1.049 to 1.164. The most important noise-impacted factors affecting off-campus students’ cumulative GPA were reading and mental tasks (OR = 2.801). Rest disturbance had a positive influence on cumulative GPA for on-campus students.
Conclusion: These results support the conclusion that various contemporary community noise sources affect university students’ activities and possibly influence their educational achievement as well.
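The Mantel–Haenszel weighted odds ratio reported above pools stratum-specific 2x2 tables with the classic formula OR_MH = sum(a_i * d_i / n_i) / sum(b_i * c_i / n_i). A minimal sketch of that computation, using made-up strata rather than the study's data:

```python
def mantel_haenszel_or(strata):
    """strata: list of (a, b, c, d) 2x2 tables, one per stratum, where
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two hypothetical strata, each with an underlying odds ratio of 2.0:
tables = [(20, 10, 10, 10), (8, 4, 4, 4)]
print(mantel_haenszel_or(tables))  # → 2.0
```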

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2rSoyzL
via IFTTT

Utility of otoacoustic emissions and olivocochlear reflex in predicting vulnerability to noise-induced inner ear damage

Sarantis Blioskas, Miltiadis Tsalighopoulos, George Psillas, Konstantinos Markou

Noise and Health 2018 20(94):101-111

Aim: The aim of the present study was to explore the possible utility of otoacoustic emissions (OAEs) and efferent system strength for determining vulnerability to noise exposure in a clinical setting.
Materials and Methods: The study group comprised 344 volunteers who had just begun mandatory basic training as Hellenic Corps Officers Military Academy cadets. Pure-tone audiograms were obtained for both ears. Participants were also subjected to diagnostic transient-evoked otoacoustic emissions (TEOAEs). Finally, they were all tested for efferent function through the suppression of TEOAEs with contralateral noise. Following baseline evaluation, all cadets fired 10 rounds using a 7.62 mm Heckler & Koch G3A3 assault rifle while lying in the prone position. Immediately after exposure to gunfire noise, and no later than 10 h, all participants completed an identical protocol a second time, which was then repeated a third time 30 days later.
Results: The data showed that after the firing drill, 280 participants suffered a temporary threshold shift (TTS) (468 ears), while in the third evaluation, conducted 30 days after exposure, 142 of these ears still presented a threshold shift relative to the baseline evaluation [permanent threshold shift (PTS) ears]. A receiver operating characteristic (ROC) curve analysis showed that OAE amplitude is predictive of future TTS and PTS. The results were slightly different for the suppression of OAEs, showing only a slight trend toward significance. The curves were used to determine cut points for evaluating the likelihood of TTS/PTS from OAE amplitude at the baseline evaluation. The decision limit for PTS was 12.45 dB SPL, yielding 71.6% sensitivity and 63.8% specificity; for TTS it was 12.35 dB SPL, yielding 50% sensitivity and 68.2% specificity.
Conclusions: Interestingly, these data yielded tentative evidence that OAE amplitude is both sensitive and specific enough to efficiently identify participants who are particularly susceptible to hearing loss caused by the impulse noise generated by firearms. Hearing conservation programs may therefore want to consider including such tests in their routines. As far as efferent strength is concerned, we feel that further research is due before implementing the suppression of OAEs in hearing conservation programs in a similar manner.
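The ROC-derived cut points reported above amount to scanning candidate thresholds and trading sensitivity against specificity; one common selection rule maximizes Youden's J (sensitivity + specificity - 1). The sketch below uses invented OAE amplitudes and outcomes, not the study's data, and the Youden rule is an assumption, since the abstract does not state which criterion was used:

```python
def roc_cutoff(scores, labels):
    """Scan candidate thresholds and return the one maximizing Youden's J.
    labels: 1 = ear with threshold shift, 0 = no shift.
    Low scores (OAE amplitudes) are treated as predicting a shift,
    matching the abstract's direction of effect."""
    best = (None, -1.0, 0.0, 0.0)
    pos = sum(labels)
    neg = len(labels) - pos
    for t in sorted(set(scores)):
        # Predict 'shift' when amplitude <= t.
        tp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s <= t and y == 0)
        sens = tp / pos
        spec = 1 - fp / neg
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j, sens, spec)
    return best  # (threshold, J, sensitivity, specificity)

# Hypothetical baseline OAE amplitudes (dB SPL) and shift outcomes:
amps  = [8.0, 9.5, 11.0, 12.0, 13.0, 14.5, 15.0, 16.0]
shift = [1,   1,   1,    0,    1,    0,    0,    0]
print(roc_cutoff(amps, shift))  # → (11.0, 0.75, 0.75, 1.0)
```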

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2IwI775
via IFTTT