Monday, September 18, 2017

Sinus Tone Generator

Use our free sinus tone generator here!


Tinnitus is a ringing or similar phantom sound that can last for hours or days at a time. Many people hear this noise not because any external sound is present but because of an error in the brain's processing of sound. Sound notching, however, may make it possible to reduce how loud the ringing seems, and the process starts with a sinus tone generator.

What Is a Sinus Tone Generator?

A sinus tone generator helps a person determine the volume of their tinnitus and the frequency at which the phantom sound is heard. Once the frequency has been matched, a notch can be created, lowering the volume of the recording in a narrow band centered on that frequency. The individual then chooses a prerecorded notched sound to listen to each night for several weeks or months.
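
For readers curious about the signal processing involved, here is a minimal sketch in Python (emphatically not AudioNotch's actual implementation) of the two ingredients described above: a pure sine probe tone for matching the tinnitus frequency, and a band-stop ("notch") filter that cuts that frequency out of a sound. The 4000 Hz frequency and the octave-wide notch are illustrative assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

fs = 44100                          # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)       # 2 seconds of samples

# 1) Probe tone: a pure sine at the frequency the listener matches to their tinnitus.
tinnitus_hz = 4000.0                # hypothetical matched frequency
probe = 0.5 * np.sin(2 * np.pi * tinnitus_hz * t)

# 2) Notched sound: white noise with a one-octave band-stop centered on that frequency.
noise = np.random.default_rng(0).normal(scale=0.2, size=t.size)
low, high = tinnitus_hz / np.sqrt(2), tinnitus_hz * np.sqrt(2)
sos = butter(4, [low, high], btype="bandstop", fs=fs, output="sos")
notched = sosfiltfilt(sos, noise)

# Write both out for listening.
wavfile.write("probe.wav", fs, (probe * 32767).astype(np.int16))
wavfile.write("notched.wav", fs, (np.clip(notched, -1, 1) * 32767).astype(np.int16))
```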

What If I Have Tinnitus at Multiple Tones?

It may be possible to create sound recordings that have more than one notch in them. However, the product works best when treating one frequency at a time. Those who have tinnitus at multiple frequencies are invited to create multiple files to seek as much relief from their symptoms as possible.

When Should You Listen to Notched Recordings?

You can listen to your notched recordings whenever you want. If you work from home, you may be able to do your listening while writing a report or sending an email; you could also get your listening time in on the way to work or before bed. The only rule is that you listen to your recording daily. Doing so trains your brain to correct the error that produces the constant ringing when no external sound is present.

What Types of Sounds Can I Listen To?

You can listen to anything that you want. AudioNotch provides several sounds of their own that you can use, but you may also use your own music or preferred white noise. For some, this may mean listening to a book on tape each night or listening to their favorite musician or motivational speaker before they go to bed.

The use of notched sound may make it easier to overcome tinnitus symptoms and start to experience a better quality of life in a short period of time. Using the sinus tone generator from AudioNotch may be an effective tool that can be used by itself or as part of an overall treatment plan.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2wrh236
via IFTTT

Effects of Lexical Variables on Silent Reading Comprehension in Individuals With Aphasia: Evidence From Eye Tracking

Purpose
Previous eye-tracking research has suggested that individuals with aphasia (IWA) do not assign syntactic structure on their first pass through a sentence during silent reading comprehension. The purpose of the present study was to investigate the time course with which lexical variables affect silent reading comprehension in IWA. Three lexical variables were investigated: word frequency, word class, and word length.
Methods
IWA and control participants without brain damage participated in the experiment. Participants read sentences while a camera tracked their eye movements.
Results
IWA showed effects of word class, word length, and word frequency that were similar to or greater than those observed in controls.
Conclusions
IWA showed sensitivity to lexical variables on the first pass through the sentence. The results are consistent with the view that IWA focus on lexical access on their first pass through a sentence and then work to build syntactic structure on subsequent passes. In addition, IWA showed very long rereading times and low skipping rates overall, which may contribute to some of the group differences in reading comprehension.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2589/2653404/Effects-of-Lexical-Variables-on-Silent-Reading
via IFTTT

Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss

Purpose
The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids.
Method
Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performances.
Results
Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance.
Conclusion
Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performances of an ASR-based system. In the future, it needs to be determined whether the ASR system is similarly successful in predicting speech processing in noise and in older people with ARHL.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2394/2648888/Automatic-Speech-Recognition-Predicts-Speech
via IFTTT

Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids

Purpose
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels—in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands—in listeners with hearing impairment using hearing aids.
Method
The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity.
Results
Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation.
Conclusion
Consonants and vowels differed in terms of the benefits afforded by their associated visual cues, as indicated by the degree of audiovisual benefit and the reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2687/2635215/Visual-Cues-Contribute-Differentially-to
via IFTTT

Inner Speech's Relationship With Overt Speech in Poststroke Aphasia

Purpose
Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition.
Method
Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8–111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004).
Results
The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech.
Conclusions
As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps because perceived task difficulty encourages reliance on inner speech). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile.
Supplemental Materials
http://ift.tt/2xiwlv4

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2406/2653957/Inner-Speechs-Relationship-With-Overt-Speech-in
via IFTTT

Training Peer Partners to Use a Speech-Generating Device With Classmates With Autism Spectrum Disorder: Exploring Communication Outcomes Across Preschool Contexts

Purpose
This study examined effects of a peer-mediated intervention that provided training on the use of a speech-generating device for preschoolers with severe autism spectrum disorder (ASD) and peer partners.
Method
Effects were examined using a multiple probe design across 3 children with ASD and limited-to-no verbal skills. Three peers without disabilities were taught to Stay, Play, and Talk using a GoTalk 4+ (Attainment Company) and were then paired up with a classmate with ASD in classroom social activities. Measures included rates of communication acts, communication mode and function, reciprocity, and engagement with peers.
Results
Following peer training, intervention effects were replicated across 3 peers, who all demonstrated an increased level and upward trend in communication acts to their classmates with ASD. Outcomes also revealed moderate intervention effects and increased levels of peer-directed communication for 3 children with ASD in classroom centers. Additional analyses revealed higher rates of communication in the added context of preferred toys and snack. The children with ASD also demonstrated improved communication reciprocity and peer engagement.
Conclusions
Results provide preliminary evidence on the benefits of combining peer-mediated and speech-generating device interventions to improve children's communication. Furthermore, it appears that preferred contexts are likely to facilitate greater communication and social engagement with peers.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2648/2653179/Training-Peer-Partners-to-Use-a-SpeechGenerating
via IFTTT

Indicators of Dysphagia in Aged Care Facilities

Purpose
The current cross-sectional study aimed to investigate risk factors for dysphagia in elderly individuals in aged care facilities.
Method
A total of 878 individuals from 42 aged care facilities were recruited for this study. The dependent outcome was speech therapist-determined swallowing function. Independent factors were Eating Assessment Tool score, oral motor assessment score, Mini-Mental State Examination, medical history, and various functional status ratings. Binomial logistic regression was used to identify independent variables associated with dysphagia in this cohort.
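As a rough illustration of the analysis described (synthetic data and hypothetical column names, not the study's dataset), a binomial logistic regression of a dysphagia indicator on case-file variables might look like this in Python:
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "dysphagia": rng.integers(0, 2, n),        # 1 = clinician-determined dysphagia
    "male": rng.integers(0, 2, n),
    "feeding_assist": rng.integers(0, 2, n),
    "pneumonia_history": rng.integers(0, 2, n),
    "mmse": rng.integers(10, 30, n),           # Mini-Mental State Examination score
})

# Binomial logistic regression: log-odds of dysphagia as a linear function of the indicators.
model = smf.logit("dysphagia ~ male + feeding_assist + pneumonia_history + mmse", data=df).fit()
print(model.params)
```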
Results
Two statistical models were constructed. Model 1 used variables from case files without the need for hands-on assessment, and Model 2 used variables that could be obtained from hands-on assessment. Variables positively associated with dysphagia identified in Model 1 were male gender, total dependence for activities of daily living, need for feeding assistance, reduced mobility (requiring assistance walking or using a wheelchair), and history of pneumonia. Variables positively associated with dysphagia identified in Model 2 were Mini-Mental State Examination score, edentulousness, and oral motor assessment score.
Conclusions
Cognitive function, dentition, and oral motor function are significant indicators associated with the presence of dysphagia in the elderly. When assessing the frail elderly, case file information can help clinicians identify individuals who may be suffering from dysphagia.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2416/2649235/Indicators-of-Dysphagia-in-Aged-Care-Facilities
via IFTTT

Speech Recognition and Cognitive Skills in Bimodal Cochlear Implant Users

Purpose
To examine the relation between speech recognition and cognitive skills in bimodal cochlear implant (CI) and hearing aid users.
Method
Seventeen bimodal CI users (28–74 years) were recruited to the study. Speech recognition tests were carried out in quiet and in noise. The cognitive tests employed included the Reading Span Test and the Trail Making Test (Daneman & Carpenter, 1980; Reitan, 1958, 1992), measuring working memory capacity and processing speed and executive functioning, respectively. Data were analyzed using paired-sample t tests, Pearson correlations, and partial correlations controlling for age.
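For illustration, a partial correlation controlling for age can be computed by correlating the residuals left after regressing each measure on age. The sketch below uses synthetic data and hypothetical variable names, not the study's:
```python
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z (linear residualization)."""
    x_res = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    y_res = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return pearsonr(x_res, y_res)

rng = np.random.default_rng(1)
age = rng.uniform(28, 74, 17)                        # 17 bimodal CI users (hypothetical)
reading_span = 40 - 0.3 * age + rng.normal(0, 3, 17)
speech_score = 0.8 * reading_span + rng.normal(0, 2, 17)
print(partial_corr(speech_score, reading_span, age))
```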
Results
The results indicate that performance on some cognitive tests predicts speech recognition and that bimodal listening generates a significant improvement in speech in quiet compared to unilateral CI listening. However, the current results also suggest that bimodal listening requires different cognitive skills than does unimodal CI listening. This is likely to relate to the relative difficulty of having to integrate 2 different signals and then map the integrated signal to representations stored in the long-term memory.
Conclusions
Even though participants obtained speech recognition benefit from bimodal listening, the results suggest that processing bimodal stimuli involves different cognitive skills than unimodal CI listening in quiet does. Thus, clinically, it is important to consider this when assessing treatment outcomes.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2752/2653958/Speech-Recognition-and-Cognitive-Skills-in-Bimodal
via IFTTT

Alveolar and Postalveolar Voiceless Fricative and Affricate Productions of Spanish–English Bilingual Children With Cochlear Implants

Purpose
This study investigates the production of voiceless alveolar and postalveolar fricatives and affricates by bilingual and monolingual children with hearing loss who use cochlear implants (CIs) and their peers with normal hearing (NH).
Method
Fifty-four children participated in our study, including 12 Spanish–English bilingual CI users (M = 6;0 [years;months]), 12 monolingual English-speaking children with CIs (M = 6;1), 20 bilingual children with NH (M = 6;5), and 10 monolingual English-speaking children with NH (M = 5;10). Picture elicitation targeting /s/, /tʃ/, and /ʃ/ was administered. Repeated-measures analyses of variance comparing group means for frication duration, rise time, and centroid frequency were conducted for the effects of CI use and bilingualism.
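Of the three acoustic parameters, centroid frequency has a simple standard definition: the amplitude-weighted mean frequency of the spectrum. A minimal sketch assuming that common definition (the authors' exact windowing and analysis band may differ):
```python
import numpy as np

def centroid_frequency(frame, fs):
    """Amplitude-weighted mean frequency of a signal frame's magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

fs = 22050
t = np.arange(0, 0.05, 1 / fs)
frame = np.sin(2 * np.pi * 5000 * t)   # a 5-kHz tone: centroid should be ~5000 Hz
print(centroid_frequency(frame, fs))
```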
Results
All groups distinguished the target sounds in the 3 acoustic parameters examined. Regarding frication duration and rise time, the Spanish productions of bilingual children with CIs differed from their bilingual peers with NH. English frication duration patterns for bilingual versus monolingual CI users also differed. Centroid frequency was a stronger place cue for children with NH than for children with CIs.
Conclusion
Patterns of fricative and affricate production display effects of bilingualism and diminished signal, yielding unique patterns for bilingual and monolingual CI users.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2427/2648980/Alveolar-and-Postalveolar-Voiceless-Fricative-and
via IFTTT

Input Subject Diversity Accelerates the Growth of Tense and Agreement: Indirect Benefits From a Parent-Implemented Intervention

Purpose
This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech.
Method
Two input variables related to T/A marking, input informativeness and full is declaratives, were compared for parents who received toy talk instruction and a quasi-control group. Language growth on tense agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth.
Results
Instruction increased parent use of full is declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full is declaratives, were also significant predictors, even after controlling for children's sentence diversity.
Conclusions
These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2619/2654122/Input-Subject-Diversity-Accelerates-the-Growth-of
via IFTTT

Swallowing Mechanics Associated With Artificial Airways, Bolus Properties, and Penetration–Aspiration Status in Trauma Patients

Purpose
Artificial airway procedures such as intubation and tracheotomy are common in the treatment of traumatic injuries, and bolus modifications may be implemented to help manage swallowing disorders. This study assessed artificial airway status, bolus properties (volume and viscosity), and the occurrence of laryngeal penetration and/or aspiration in relation to mechanical features of swallowing.
Method
Coordinates of anatomical landmarks were extracted at minimum and maximum hyolaryngeal excursion from 228 videofluoroscopic swallowing studies representing 69 traumatically injured U.S. military service members with dysphagia. Morphometric canonical variate and regression analyses examined associations between swallowing mechanics and bolus properties based on artificial airway and penetration–aspiration status.
Results
Significant differences in swallowing mechanics were detected between extubated versus tracheotomized (D = 1.32, p < .0001), extubated versus decannulated (D = 1.74, p < .0001), and decannulated versus tracheotomized (D = 1.24, p < .0001) groups per post hoc discriminant function analysis. Tracheotomy-in-situ and decannulated subgroups exhibited increased head/neck extension and posterior relocation of the larynx. Swallowing mechanics associated with (a) penetration–aspiration status and (b) bolus properties were moderately related for extubated and decannulated subgroups, but not the tracheotomized subgroup, per morphometric regression analysis.
Conclusion
Specific differences in swallowing mechanics associated with artificial airway status and certain bolus properties may guide therapeutic intervention in trauma-based dysphagia.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2442/2649304/Swallowing-Mechanics-Associated-With-Artificial
via IFTTT

Applying Item Response Theory to the Development of a Screening Adaptation of the Goldman-Fristoe Test of Articulation–Second Edition

Purpose
Item response theory (IRT) is a psychometric approach to measurement that uses latent trait abilities (e.g., speech sound production skills) to model performance on individual items that vary by difficulty and discrimination. An IRT analysis was applied to preschoolers' productions of the words on the Goldman-Fristoe Test of Articulation–Second Edition (GFTA-2) to identify candidates for a screening measure of speech sound production skills.
Method
The phoneme accuracies from 154 preschoolers, with speech skills on the GFTA-2 ranging from the 1st to above the 90th percentile, were analyzed with a 2-parameter logistic model.
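For reference, the 2-parameter logistic model expresses the probability of a correct production as P(θ) = 1 / (1 + e^(−a(θ − b))), where θ is the child's latent ability, a the item's discrimination, and b its difficulty. A minimal sketch of that standard formula (parameter values are illustrative, not the study's estimates):
```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct production for a
    child with latent ability `theta`, given item discrimination `a` and
    difficulty `b`."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical items: a difficult, highly discriminating phoneme vs. an easy one.
print(p_correct(theta=0.0, a=2.0, b=1.5))   # hard item, average ability -> ~0.05
print(p_correct(theta=0.0, a=1.0, b=-1.0))  # easy item, average ability -> ~0.73
```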
Results
A total of 108 of the 232 phonemes from stimuli in the sounds-in-words subtest fit the IRT model. These phonemes, and subgroups of the most difficult of these phonemes, correlated significantly with the children's overall percentile scores on the GFTA-2. Regression equations calculated for the 5 and 10 most difficult phonemes predicted overall percentile score at levels commensurate with other screening measures.
Conclusions
These results suggest that speech production accuracy can be screened effectively with a small number of sounds. They motivate further research toward the development of a screening measure of children's speech sound production skills whose stimuli consist of a limited number of difficult phonemes.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2672/2653405/Applying-Item-Response-Theory-to-the-Development
via IFTTT

Modeling the Pathophysiology of Phonotraumatic Vocal Hyperfunction With a Triangular Glottal Model of the Vocal Folds

Purpose
Our goal was to test prevailing assumptions about the underlying biomechanical and aeroacoustic mechanisms associated with phonotraumatic lesions of the vocal folds using a numerical lumped-element model of voice production.
Method
A numerical model with a triangular glottis, posterior glottal opening, and arytenoid posturing is proposed. Normal voice is altered by introducing various prephonatory configurations. Potential compensatory mechanisms (increased subglottal pressure, muscle activation, and supraglottal constriction) are adjusted to restore an acoustic target output through a control loop that mimics a simplified version of auditory feedback.
Results
The degree of incomplete glottal closure in both the membranous and posterior portions of the folds consistently leads to a reduction in sound pressure level, fundamental frequency, harmonic richness, and harmonics-to-noise ratio. The compensatory mechanisms lead to significantly increased vocal-fold collision forces, maximum flow-declination rate, and amplitude of unsteady flow, without significantly altering the acoustic output.
Conclusion
Modeling provided potentially important insights into the pathophysiology of phonotraumatic vocal hyperfunction by demonstrating that compensatory mechanisms can counteract deterioration in the voice acoustic signal due to incomplete glottal closure, but this also leads to high vocal-fold collision forces (reflected in aerodynamic measures), which significantly increases the risk of developing phonotrauma.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2452/2652562/Modeling-the-Pathophysiology-of-Phonotraumatic
via IFTTT

The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise

Purpose
This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise.
Method
Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise.
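Speech reception thresholds are typically estimated with an adaptive track that converges on the signal-to-noise ratio giving about 50% intelligibility. The abstract does not specify the procedure used, so the sketch below shows a generic 1-down/1-up track with an illustrative step size and stopping rule:
```python
def track_srt(responses, start_snr=0.0, step=2.0):
    """Generic 1-down/1-up adaptive track converging on ~50% correct.

    responses: iterable of booleans (True = sentence repeated correctly).
    Returns an SRT estimate in dB SNR as the mean of the last 6 trial levels.
    """
    snr, history = start_snr, []
    for correct in responses:
        history.append(snr)
        snr += -step if correct else step  # harder after success, easier after failure
    return sum(history[-6:]) / 6.0

print(track_srt([True, True, False, True, False, True, True, False, True, False]))
```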
Results
The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than in unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise.
Conclusions
Dynamic-pitch cues aid speech recognition in noise, particularly when the noise is temporally modulated. Greater hearing loss reduces the dynamic-pitch benefit for older listeners.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2725/2648979/The-Effect-of-Dynamic-Pitch-on-Speech-Recognition
via IFTTT

Trans Male Voice in the First Year of Testosterone Therapy: Make No Assumptions

Purpose
The purpose of this study was to prospectively examine changes in the gender-related voice domain of pitch, measured by fundamental frequency; in the function-related domains of vocal quality, range, and habitual pitch level; and in the self-perceptions of transmasculine people during their first year of testosterone treatment.
Method
Seven trans men received 2 voice assessments at baseline and 1 assessment at 3, 6, 9, and 12 months after starting treatment.
Results
Vocal quality measures varied between and within participants but were generally within normal limits throughout the year. Mean fundamental frequency (MF0) during reading decreased, although to variable extents and rates. Phonation frequency range shifted down the scale, although it increased in some participants and decreased in others. Considering MF0 and phonation frequency range together in a measure of habitual pitch level revealed that the majority of participants spoke using an MF0 that was low within their range compared with cisgender norms. Although the trans men generally self-reported voice masculinization, it was not correlated with MF0, frequency range, or habitual pitch level at any time point or with MF0 note change from baseline to 1 year of testosterone treatment, but correlations should be interpreted with caution due to the heterogeneous responses of the 7 participants.
Conclusion
In trans men, consideration of voice deepening in the context of objective and subjective measures of voice can reveal unique profiles and inform patient care.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2472/2654123/Trans-Male-Voice-in-the-First-Year-of-Testosterone
via IFTTT

Relevance of the Implementation of Teeth in Three-Dimensional Vocal Tract Models

Purpose
Recently, efforts have been made to investigate the vocal tract using magnetic resonance imaging (MRI). Due to technical limitations, teeth were omitted in many previous studies on vocal tract acoustics. However, the knowledge of how teeth influence vocal tract acoustics might be important in order to estimate the necessity of implementing teeth in vocal tract models. The aim of this study was therefore to estimate the effect of teeth on vocal tract acoustics.
Method
The acoustic properties of 18 solid (3-dimensional printed) vocal tract models without teeth were compared to the same 18 models including teeth in terms of resonance frequencies (fRn). The fRn were obtained from the transfer functions of these models excited by white noise at the glottis level. The models were derived from MRI data of 2 trained singers performing 3 different vowel conditions (/i/, /a/, and /u/) in speech and low-pitched and high-pitched singing.
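A common way to obtain a transfer function from white-noise excitation (which may differ from the authors' exact pipeline) is the H1 estimator: the cross-spectral density of input and output divided by the power spectral density of the input, with resonances read off the magnitude peaks. A minimal sketch with a stand-in output signal:
```python
import numpy as np
from scipy.signal import csd, welch, find_peaks

fs = 44100
rng = np.random.default_rng(0)
x = rng.normal(size=fs)                        # white-noise excitation at the "glottis"
y = np.convolve(x, rng.normal(size=256))[:fs]  # stand-in for the measured radiated sound

f, Pxy = csd(x, y, fs=fs, nperseg=4096)        # cross-spectral density of input and output
_, Pxx = welch(x, fs=fs, nperseg=4096)         # power spectral density of the input
H = np.abs(Pxy / Pxx)                          # H1 transfer-function magnitude estimate
peaks, _ = find_peaks(H)
print(f[peaks][:5])                            # candidate resonance frequencies fRn (Hz)
```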
Results
Depending on the oral configuration, models exhibiting side cavities or side branches showed major changes in the transfer function, through the introduction of pole-zero pairs, when teeth were implemented.
Conclusions
To avoid errors in modeling, teeth should be included in 3-dimensional vocal tract models for acoustic evaluation.
Supplemental Material
http://ift.tt/2wnkzL9

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2379/2654188/Relevance-of-the-Implementation-of-Teeth-in
via IFTTT

How Stuttering Develops: The Multifactorial Dynamic Pathways Theory

Purpose
We advanced a multifactorial, dynamic account of the complex, nonlinear interactions of motor, linguistic, and emotional factors contributing to the development of stuttering. Our purpose here is to update our account as the multifactorial dynamic pathways theory.
Method
We review evidence related to how stuttering develops, including genetic/epigenetic factors; motor, linguistic, and emotional features; and advances in neuroimaging studies. We update evidence for our earlier claim: Although stuttering ultimately reflects impairment in speech sensorimotor processes, its course over the life span is strongly conditioned by linguistic and emotional factors.
Results
Our current account places primary emphasis on the dynamic developmental context in which stuttering emerges and follows its course during the preschool years. Rapid changes in many neurobehavioral systems are ongoing, and critical interactions among these systems likely play a major role in determining persistence of or recovery from stuttering.
Conclusion
Stuttering, or childhood onset fluency disorder (Diagnostic and Statistical Manual of Mental Disorders, 5th edition; American Psychiatric Association [APA], 2013), is a neurodevelopmental disorder that begins when neural networks supporting speech, language, and emotional functions are rapidly developing. The multifactorial dynamic pathways theory motivates experimental and clinical work to determine the specific factors that contribute to each child's pathway to the diagnosis of stuttering and those most likely to promote recovery.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2483/2652602/How-Stuttering-Develops-The-Multifactorial-Dynamic
via IFTTT

“Whatdunit?” Sentence Comprehension Abilities of Children With SLI: Sensitivity to Word Order in Canonical and Noncanonical Structures

Purpose
With Aim 1, we compared the comprehension of and sensitivity to canonical and noncanonical word order structures in school-age children with specific language impairment (SLI) and same-age typically developing (TD) children. Aim 2 centered on the developmental improvement of sentence comprehension in the groups. With Aim 3, we compared the comprehension error patterns of the groups.
Method
Using a “Whatdunit” agent selection task, 117 children with SLI and 117 TD children (ages 7;0–11;11 [years;months]) propensity matched on age, gender, mother's education, and family income pointed to the picture that best represented the agent in semantically implausible canonical structures (subject–verb–object, subject relative) and noncanonical structures (passive, object relative).
Results
The SLI group performed worse than the TD group across sentence types. TD children demonstrated developmental improvement across each sentence type, but children with SLI showed improvement only for canonical sentences. Both groups chose the object noun as agent significantly more often than the noun appearing in a prepositional phrase.
Conclusions
In the absence of semantic–pragmatic cues, comprehension of canonical and noncanonical sentences by children with SLI is limited, with noncanonical sentence comprehension being disproportionately limited. The children's ability to make proper semantic role assignments to the noun arguments in sentences, especially noncanonical ones, is significantly hindered.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2603/2652493/Whatdunit-Sentence-Comprehension-Abilities-of
via IFTTT

A Cross-Language Study of Acoustic Predictors of Speech Intelligibility in Individuals With Parkinson's Disease

Purpose
The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean).
Method
A total of 48 speakers from the 4 speaker groups (AE speakers with PD, Korean speakers with PD, healthy English speakers, and healthy Korean speakers) were asked to read a paragraph in their native languages. Four acoustic variables were analyzed: acoustic vowel space, voice onset time contrast scores, normalized pairwise variability index, and articulation rate. Speech intelligibility scores were obtained from scaled estimates of sentences extracted from the paragraph.
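Of the four acoustic variables, the normalized pairwise variability index has a standard formula: 100 times the mean of the normalized differences between successive durations. A minimal sketch, assuming that standard formula rather than the authors' exact code:
```python
def npvi(durations):
    """Normalized pairwise variability index over successive durations (e.g., vowel
    durations in ms): 100/(m-1) * sum(|d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2))."""
    pairs = zip(durations[:-1], durations[1:])
    return 100.0 * sum(abs(d1 - d2) / ((d1 + d2) / 2.0) for d1, d2 in pairs) / (len(durations) - 1)

print(npvi([120.0, 90.0, 150.0, 100.0]))  # higher nPVI = more durational contrast
```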
Results
The findings indicated that the multiple regression models of speech intelligibility were different in Korean and AE, even with the same set of predictor variables and with speakers matched on speech intelligibility across languages. Analysis of the descriptive data for the acoustic variables showed the expected compression of the vowel space in speakers with PD in both languages, lower normalized pairwise variability index scores in Korean compared with AE, and no differences within or across language in articulation rate.
Conclusions
The results indicate that the basis of an intelligibility deficit in dysarthria is likely to depend on the native language of the speaker and listener. Additional research is required to explore other potential predictor variables, as well as additional language comparisons to pursue cross-linguistic considerations in classification and diagnosis of dysarthria types.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2506/2650812/A-CrossLanguage-Study-of-Acoustic-Predictors-of
via IFTTT

Distributed Training Enhances Implicit Sequence Acquisition in Children With Specific Language Impairment

Purpose
This study explored the effects of 2 different training structures on the implicit acquisition of a sequence in a serial reaction time (SRT) task in children with and without specific language impairment (SLI).
Method
All of the children underwent 3 training sessions, followed by a retention session 2 weeks after the last session. In the massed-training condition, the 3 training sessions were in immediate succession on 1 day, whereas in the distributed-training condition, the 3 training sessions were spread over a 1-week period in an expanding schedule format.
Results
Statistical analyses showed that the children with normal language were unaffected by the training conditions, performing the SRT task similarly in both training conditions. The children with SLI, however, were affected by the training structure, performing the SRT task better when the training sessions were spaced over time rather than clustered on 1 day.
Conclusion
This study demonstrated that although intensive training does not increase learning in children with SLI, distributing training sessions over time does increase learning. The implications of these results for the learning abilities of children with SLI are discussed, as are the mechanisms involved in massed versus distributed learning.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2636/2653205/Distributed-Training-Enhances-Implicit-Sequence
via IFTTT

Short-Term Effect of Two Semi-Occluded Vocal Tract Training Programs on the Vocal Quality of Future Occupational Voice Users: “Resonant Voice Training Using Nasal Consonants” Versus “Straw Phonation”

Purpose
The purpose of this study was to determine the short-term effect of 2 semi-occluded vocal tract training programs, “resonant voice training using nasal consonants” versus “straw phonation,” on the vocal quality of vocally healthy future occupational voice users.
Method
A multigroup pretest–posttest randomized control group design was used. Thirty healthy speech-language pathology students with a mean age of 19 years (range: 17–22 years) were randomly assigned into a resonant voice training group (practicing resonant exercises across 6 weeks, n = 10), a straw phonation group (practicing straw phonation across 6 weeks, n = 10), or a control group (receiving no voice training, n = 10). A voice assessment protocol consisting of both subjective (questionnaire, participant's self-report, auditory–perceptual evaluation) and objective (maximum performance task, aerodynamic assessment, voice range profile, acoustic analysis, acoustic voice quality index, dysphonia severity index) measurements and determinations was used to evaluate the participants' voice pre- and posttraining. Groups were compared over time using linear mixed models and generalized linear mixed models. Within-group effects of time were determined using post hoc pairwise comparisons.
Results
No significant time × group interactions were found for any of the outcome measures, indicating no differences in evolution over time among the 3 groups. Within-group effects of time showed a significant improvement in dysphonia severity index in the resonant voice training group, and a significant improvement in the intensity range in the straw phonation group.
Conclusions
Results suggest that the semi-occluded vocal tract training programs using resonant voice training and straw phonation may have a positive impact on the vocal quality and vocal capacities of future occupational voice users. The resonant voice training improved the dysphonia severity index, and the straw phonation training expanded the intensity range in this population.

from #Audiology via xlomafota13 on Inoreader http://article/60/9/2519/2652563/ShortTerm-Effect-of-Two-SemiOccluded-Vocal-Tract
via IFTTT

Effects of Lexical Variables on Silent Reading Comprehension in Individuals With Aphasia: Evidence From Eye Tracking

Purpose
Previous eye-tracking research has suggested that individuals with aphasia (IWA) do not assign syntactic structure on their first pass through a sentence during silent reading comprehension. The purpose of the present study was to investigate the time course with which lexical variables affect silent reading comprehension in IWA. Three lexical variables were investigated: word frequency, word class, and word length.
Methods
IWA and control participants without brain damage participated in the experiment. Participants read sentences while a camera tracked their eye movements.
Results
IWA showed effects of word class, word length, and word frequency that were similar to or greater than those observed in controls.
Conclusions
IWA showed sensitivity to lexical variables on the first pass through the sentence. The results are consistent with the view that IWA focus on lexical access on their first pass through a sentence and then work to build syntactic structure on subsequent passes. In addition, IWA showed very long rereading times and low skipping rates overall, which may contribute to some of the group differences in reading comprehension.

from #Audiology via ola Kala on Inoreader http://article/60/9/2589/2653404/Effects-of-Lexical-Variables-on-Silent-Reading
via IFTTT

Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss

Purpose
The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids.
Method
Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performances.
Results
Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance.
Conclusion
Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performances of an ASR-based system. In the future, it needs to be determined if the ASR system is similarly successful in predicting speech processing in noise and by older people with ARHL.

from #Audiology via ola Kala on Inoreader http://article/60/9/2394/2648888/Automatic-Speech-Recognition-Predicts-Speech
via IFTTT

Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids

Purpose
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels—in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands—in listeners with hearing impairment using hearing aids.
Method
The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity.
Results
Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation.
Conclusion
Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.

from #Audiology via ola Kala on Inoreader http://article/60/9/2687/2635215/Visual-Cues-Contribute-Differentially-to
via IFTTT

Inner Speech's Relationship With Overt Speech in Poststroke Aphasia

Purpose
Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition.
Method
Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8–111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Al, 2004).
Results
The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech.
Conclusions
As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile.
Supplemental Materials
http://ift.tt/2xiwlv4

from #Audiology via ola Kala on Inoreader http://article/60/9/2406/2653957/Inner-Speechs-Relationship-With-Overt-Speech-in
via IFTTT

Training Peer Partners to Use a Speech-Generating Device With Classmates With Autism Spectrum Disorder: Exploring Communication Outcomes Across Preschool Contexts

Purpose
This study examined effects of a peer-mediated intervention that provided training on the use of a speech-generating device for preschoolers with severe autism spectrum disorder (ASD) and peer partners.
Method
Effects were examined using a multiple probe design across 3 children with ASD and limited to no verbal skills. Three peers without disabilities were taught to Stay, Play, and Talk using a GoTalk 4+ (Attainment Company) and were then paired up with a classmate with ASD in classroom social activities. Measures included rates of communication acts, communication mode and function, reciprocity, and engagement with peers.
Results
Following peer training, intervention effects were replicated across 3 peers, who all demonstrated an increased level and upward trend in communication acts to their classmates with ASD. Outcomes also revealed moderate intervention effects and increased levels of peer-directed communication for 3 children with ASD in classroom centers. Additional analyses revealed higher rates of communication in the added context of preferred toys and snack. The children with ASD also demonstrated improved communication reciprocity and peer engagement.
Conclusions
Results provide preliminary evidence on the benefits of combining peer-mediated and speech-generating device interventions to improve children's communication. Furthermore, it appears that preferred contexts are likely to facilitate greater communication and social engagement with peers.

from #Audiology via ola Kala on Inoreader http://article/60/9/2648/2653179/Training-Peer-Partners-to-Use-a-SpeechGenerating
via IFTTT

Indicators of Dysphagia in Aged Care Facilities

Purpose
The current cross-sectional study aimed to investigate risk factors for dysphagia in elderly individuals in aged care facilities.
Method
A total of 878 individuals from 42 aged care facilities were recruited for this study. The dependent outcome was speech therapist-determined swallowing function. Independent factors were Eating Assessment Tool score, oral motor assessment score, Mini-Mental State Examination, medical history, and various functional status ratings. Binomial logistic regression was used to identify independent variables associated with dysphagia in this cohort.
Results
Two statistical models were constructed. Model 1 used variables from case files without the need for hands-on assessment, and Model 2 used variables that could be obtained from hands-on assessment. Variables positively associated with dysphagia identified in Model 1 were male gender, total dependence for activities of daily living, need for feeding assistance, mobility, requiring assistance walking or using a wheelchair, and history of pneumonia. Variables positively associated with dysphagia identified in Model 2 were Mini-Mental State Examination score, edentulousness, and oral motor assessments score.
Conclusions
Cognitive function, dentition, and oral motor function are significant indicators associated with the presence of swallowing in the elderly. When assessing the frail elderly, case file information can help clinicians identify frail elderly individuals who may be suffering from dysphagia.

from #Audiology via ola Kala on Inoreader http://article/60/9/2416/2649235/Indicators-of-Dysphagia-in-Aged-Care-Facilities
via IFTTT

Speech Recognition and Cognitive Skills in Bimodal Cochlear Implant Users

Purpose
To examine the relation between speech recognition and cognitive skills in bimodal cochlear implant (CI) and hearing aid users.
Method
Seventeen bimodal CI users (28–74 years) were recruited to the study. Speech recognition tests were carried out in quiet and in noise. The cognitive tests employed included the Reading Span Test and the Trail Making Test (Daneman & Carpenter, 1980; Reitan, 1958, 1992), measuring working memory capacity and processing speed and executive functioning, respectively. Data were analyzed using paired-sample t tests, Pearson correlations, and partial correlations controlling for age.
Results
The results indicate that performance on some cognitive tests predicts speech recognition and that bimodal listening generates a significant improvement in speech in quiet compared to unilateral CI listening. However, the current results also suggest that bimodal listening requires different cognitive skills than does unimodal CI listening. This is likely to relate to the relative difficulty of having to integrate 2 different signals and then map the integrated signal to representations stored in the long-term memory.
Conclusions
Even though participants obtained speech recognition benefit from bimodal listening, the results suggest that processing bimodal stimuli involves different cognitive skills than does unimodal conditions in quiet. Thus, clinically, it is important to consider this when assessing treatment outcomes.

from #Audiology via ola Kala on Inoreader http://article/60/9/2752/2653958/Speech-Recognition-and-Cognitive-Skills-in-Bimodal
via IFTTT

Alveolar and Postalveolar Voiceless Fricative and Affricate Productions of Spanish–English Bilingual Children With Cochlear Implants

Purpose
This study investigates the production of voiceless alveolar and postalveolar fricatives and affricates by bilingual and monolingual children with hearing loss who use cochlear implants (CIs) and their peers with normal hearing (NH).
Method
Fifty-four children participated in our study, including 12 Spanish–English bilingual CI users (M = 6;0 [years;months]), 12 monolingual English-speaking children with CIs (M = 6;1), 20 bilingual children with NH (M = 6;5), and 10 monolingual English-speaking children with NH (M = 5;10). Picture elicitation targeting /s/, /tʃ/, and /ʃ/ was administered. Repeated-measures analyses of variance comparing group means for frication duration, rise time, and centroid frequency were conducted for the effects of CI use and bilingualism.
Results
All groups distinguished the target sounds in the 3 acoustic parameters examined. Regarding frication duration and rise time, the Spanish productions of bilingual children with CIs differed from their bilingual peers with NH. English frication duration patterns for bilingual versus monolingual CI users also differed. Centroid frequency was a stronger place cue for children with NH than for children with CIs.
Conclusion
Patterns of fricative and affricate production display effects of bilingualism and diminished signal, yielding unique patterns for bilingual and monolingual CI users.

from #Audiology via ola Kala on Inoreader http://article/60/9/2427/2648980/Alveolar-and-Postalveolar-Voiceless-Fricative-and
via IFTTT

Input Subject Diversity Accelerates the Growth of Tense and Agreement: Indirect Benefits From a Parent-Implemented Intervention

Purpose
This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech.
Method
Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full is declaratives. Language growth on tense agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth.
Results
Instruction increased parent use of full is declaratives (ηp2 ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full is declaratives, were also significant predictors, even after controlling for children's sentence diversity.
Conclusions
These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system.

from #Audiology via ola Kala on Inoreader http://article/60/9/2619/2654122/Input-Subject-Diversity-Accelerates-the-Growth-of
via IFTTT

Swallowing Mechanics Associated With Artificial Airways, Bolus Properties, and Penetration–Aspiration Status in Trauma Patients

Purpose
Artificial airway procedures such as intubation and tracheotomy are common in the treatment of traumatic injuries, and bolus modifications may be implemented to help manage swallowing disorders. This study assessed artificial airway status, bolus properties (volume and viscosity), and the occurrence of laryngeal penetration and/or aspiration in relation to mechanical features of swallowing.
Method
Coordinates of anatomical landmarks were extracted at minimum and maximum hyolaryngeal excursion from 228 videofluoroscopic swallowing studies representing 69 traumatically injured U.S. military service members with dysphagia. Morphometric canonical variate and regression analyses examined associations between swallowing mechanics and bolus properties based on artificial airway and penetration–aspiration status.
Results
Significant differences in swallowing mechanics were detected between extubated versus tracheotomized (D = 1.32, p < .0001), extubated versus decannulated (D = 1.74, p < .0001), and decannulated versus tracheotomized (D = 1.24, p < .0001) groups per post hoc discriminant function analysis. Tracheotomy-in-situ and decannulated subgroups exhibited increased head/neck extension and posterior relocation of the larynx. Swallowing mechanics associated with (a) penetration–aspiration status and (b) bolus properties were moderately related for extubated and decannulated subgroups, but not the tracheotomized subgroup, per morphometric regression analysis.
Conclusion
Specific differences in swallowing mechanics associated with artificial airway status and certain bolus properties may guide therapeutic intervention in trauma-based dysphagia.

from #Audiology via ola Kala on Inoreader http://article/60/9/2442/2649304/Swallowing-Mechanics-Associated-With-Artificial
via IFTTT

Applying Item Response Theory to the Development of a Screening Adaptation of the Goldman-Fristoe Test of Articulation–Second Edition

Purpose
Item response theory (IRT) is a psychometric approach to measurement that uses latent trait abilities (e.g., speech sound production skills) to model performance on individual items that vary by difficulty and discrimination. An IRT analysis was applied to preschoolers' productions of the words on the Goldman-Fristoe Test of Articulation–Second Edition (GFTA-2) to identify candidates for a screening measure of speech sound production skills.
Method
The phoneme accuracies from 154 preschoolers, with speech skills on the GFTA-2 ranging from the 1st to above the 90th percentile, were analyzed with a 2-parameter logistic model.
Results
A total of 108 of the 232 phonemes from stimuli in the sounds-in-words subtest fit the IRT model. These phonemes, and subgroups of the most difficult of these phonemes, correlated significantly with the children's overall percentile scores on the GFTA-2. Regression equations calculated for the 5 and 10 most difficult phonemes predicted overall percentile score at levels commensurate with other screening measures.
Conclusions
These results suggest that speech production accuracy can be screened effectively with a small number of sounds. They motivate further research toward the development of a screening measure of children's speech sound production skills whose stimuli consist of a limited number of difficult phonemes.

from #Audiology via ola Kala on Inoreader http://article/60/9/2672/2653405/Applying-Item-Response-Theory-to-the-Development
via IFTTT

Modeling the Pathophysiology of Phonotraumatic Vocal Hyperfunction With a Triangular Glottal Model of the Vocal Folds

Purpose
Our goal was to test prevailing assumptions about the underlying biomechanical and aeroacoustic mechanisms associated with phonotraumatic lesions of the vocal folds using a numerical lumped-element model of voice production.
Method
A numerical model with a triangular glottis, posterior glottal opening, and arytenoid posturing is proposed. Normal voice is altered by introducing various prephonatory configurations. Potential compensatory mechanisms (increased subglottal pressure, muscle activation, and supraglottal constriction) are adjusted to restore an acoustic target output through a control loop that mimics a simplified version of auditory feedback.
Results
The degree of incomplete glottal closure in both the membranous and posterior portions of the folds consistently leads to a reduction in sound pressure level, fundamental frequency, harmonic richness, and harmonics-to-noise ratio. The compensatory mechanisms lead to significantly increased vocal-fold collision forces, maximum flow-declination rate, and amplitude of unsteady flow, without significantly altering the acoustic output.
Conclusion
Modeling provided potentially important insights into the pathophysiology of phonotraumatic vocal hyperfunction by demonstrating that compensatory mechanisms can counteract deterioration in the voice acoustic signal due to incomplete glottal closure, but this also leads to high vocal-fold collision forces (reflected in aerodynamic measures), which significantly increases the risk of developing phonotrauma.
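
The auditory-feedback control loop in the Method can be caricatured in a few lines: a compensatory parameter (here, subglottal pressure) is raised until a simulated acoustic output recovers its target. The output function below is a made-up placeholder for illustration, not the authors' lumped-element model.

def spl_output(subglottal_pressure_kpa, closure_deficit):
    # Placeholder for the voice model: more driving pressure raises the
    # output level; incomplete glottal closure lowers it. (Invented numbers.)
    return 60.0 + 20.0 * subglottal_pressure_kpa - 15.0 * closure_deficit

def compensate(target_spl=78.0, closure_deficit=0.5, step=0.05):
    """Simplified auditory-feedback loop: increase subglottal pressure
    until the acoustic target is restored."""
    pressure = 0.6                        # nominal prephonatory setting (made up)
    while spl_output(pressure, closure_deficit) < target_spl:
        pressure += step                  # compensation that also raises
                                          # vocal-fold collision forces
    return pressure

print(compensate(closure_deficit=0.2))    # mild deficit: little compensation
print(compensate(closure_deficit=0.8))    # larger deficit: more pressure needed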

from #Audiology via ola Kala on Inoreader http://article/60/9/2452/2652562/Modeling-the-Pathophysiology-of-Phonotraumatic
via IFTTT

The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise

Purpose
This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise.
Method
Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise.
Results
The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise.
Conclusions
Dynamic-pitch cues aid speech recognition in noise, particularly when the noise is temporally modulated. Among older listeners, greater hearing loss reduces the dynamic-pitch benefit.
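
Speech reception thresholds of the kind measured here are usually tracked adaptively; below is a minimal one-up/one-down staircase sketch converging on roughly 50% sentence intelligibility, with a simulated listener standing in for real trials (the step size, trial count, and toy psychometric function are arbitrary choices, not the study's procedure).

import random

def simulated_listener(snr_db, true_srt_db=-6.0):
    # Toy psychometric rule: correct responses become more likely
    # as the presented SNR rises above the listener's true SRT.
    return random.random() < 1.0 / (1.0 + 10.0 ** (-(snr_db - true_srt_db) / 4.0))

def measure_srt(trials=40, step_db=2.0):
    """One-up/one-down staircase: lower the SNR after a correct response,
    raise it after an error; the track hovers near the 50% point."""
    snr, track = 0.0, []
    for _ in range(trials):
        snr += -step_db if simulated_listener(snr) else step_db
        track.append(snr)
    return sum(track[-10:]) / 10.0        # crude estimate from the last trials

print(measure_srt())                      # should land near -6 dB SNR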

from #Audiology via ola Kala on Inoreader http://article/60/9/2725/2648979/The-Effect-of-Dynamic-Pitch-on-Speech-Recognition
via IFTTT

Trans Male Voice in the First Year of Testosterone Therapy: Make No Assumptions

Purpose
The purpose of this study was to prospectively examine changes in the gender-related voice domain of pitch, measured by fundamental frequency; the function-related domains of vocal quality, range, and habitual pitch level; and the self-perceptions of transmasculine people during their first year of testosterone treatment.
Method
Seven trans men received 2 voice assessments at baseline and 1 assessment at 3, 6, 9, and 12 months after starting treatment.
Results
Vocal quality measures varied between and within participants but were generally within normal limits throughout the year. Mean fundamental frequency (MF0) during reading decreased, although to variable extents and rates. Phonation frequency range shifted down the scale, although it increased in some participants and decreased in others. Considering MF0 and phonation frequency range together in a measure of habitual pitch level revealed that the majority of participants spoke using an MF0 that was low within their range compared with cisgender norms. Although the trans men generally self-reported voice masculinization, it was not correlated with MF0, frequency range, or habitual pitch level at any time point, or with MF0 note change from baseline to 1 year of testosterone treatment; these correlations should be interpreted with caution given the heterogeneous responses of the 7 participants.
Conclusion
In trans men, consideration of voice deepening in the context of objective and subjective measures of voice can reveal unique profiles and inform patient care.
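
Mean fundamental frequency during reading, the MF0 measure above, can be estimated frame by frame with a basic autocorrelation pitch tracker; the bare-bones sketch below uses a synthetic 110-Hz tone in place of a recorded reading passage.

import numpy as np

def frame_f0(frame, sr, fmin=60.0, fmax=300.0):
    """Crude autocorrelation F0 estimate for one frame (illustration only)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag = lo + np.argmax(ac[lo:hi])  # strongest periodicity in range
    return sr / best_lag

sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 110.0 * t)    # stand-in for one second of reading
frames = speech.reshape(-1, 800)          # 50-ms analysis frames
f0s = np.array([frame_f0(f, sr) for f in frames])
print(f0s.mean())                         # MF0 over the passage, about 110 Hz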

from #Audiology via ola Kala on Inoreader http://article/60/9/2472/2654123/Trans-Male-Voice-in-the-First-Year-of-Testosterone
via IFTTT

Relevance of the Implementation of Teeth in Three-Dimensional Vocal Tract Models

Purpose
Recently, efforts have been made to investigate the vocal tract using magnetic resonance imaging (MRI). Due to technical limitations, teeth were omitted in many previous studies on vocal tract acoustics. However, knowing how teeth influence vocal tract acoustics is important for judging whether teeth need to be implemented in vocal tract models. The aim of this study was therefore to estimate the effect of teeth on vocal tract acoustics.
Method
The acoustic properties of 18 solid (3-dimensional printed) vocal tract models without teeth were compared to the same 18 models including teeth in terms of resonance frequencies (fRn). The fRn were obtained from the transfer functions of these models excited by white noise at the glottis level. The models were derived from MRI data of 2 trained singers performing 3 different vowel conditions (/i/, /a/, and /u/) in speech and low-pitched and high-pitched singing.
Results
Depending on the oral configuration, models exhibiting side cavities or side branches showed major changes in the transfer function when teeth were implemented, reflected in the introduction of pole-zero pairs.
Conclusions
To avoid errors in modeling, teeth should be included in 3-dimensional vocal tract models for acoustic evaluation.
Supplemental Material
http://ift.tt/2wnkzL9
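
Resonance frequencies such as the fRn above are read off as peaks in a measured transfer function; the scipy sketch below finds spectral peaks in a white-noise-excited output, with a simple synthetic resonator standing in for the printed vocal tract models.

import numpy as np
from scipy.signal import lfilter, welch, find_peaks

sr = 44100
noise = np.random.randn(sr * 2)           # white-noise excitation at the "glottis"

# Stand-in for a vocal tract: a single two-pole resonance near 500 Hz.
r, fc = 0.995, 500.0
a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * fc / sr), r ** 2]
output = lfilter([1.0], a, noise)

freqs, psd = welch(output, fs=sr, nperseg=4096)   # smoothed output spectrum
peaks, _ = find_peaks(10.0 * np.log10(psd), prominence=10.0)
print(freqs[peaks][:5])                   # candidate resonance frequencies (~500 Hz)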

from #Audiology via ola Kala on Inoreader http://article/60/9/2379/2654188/Relevance-of-the-Implementation-of-Teeth-in
via IFTTT

How Stuttering Develops: The Multifactorial Dynamic Pathways Theory

Purpose
We advanced a multifactorial, dynamic account of the complex, nonlinear interactions of motor, linguistic, and emotional factors contributing to the development of stuttering. Our purpose here is to update our account as the multifactorial dynamic pathways theory.
Method
We review evidence related to how stuttering develops, including genetic/epigenetic factors; motor, linguistic, and emotional features; and advances in neuroimaging studies. We update evidence for our earlier claim: Although stuttering ultimately reflects impairment in speech sensorimotor processes, its course over the life span is strongly conditioned by linguistic and emotional factors.
Results
Our current account places primary emphasis on the dynamic developmental context in which stuttering emerges and follows its course during the preschool years. Rapid changes in many neurobehavioral systems are ongoing, and critical interactions among these systems likely play a major role in determining persistence of or recovery from stuttering.
Conclusion
Stuttering, or childhood onset fluency disorder (Diagnostic and Statistical Manual of Mental Disorders, 5th edition; American Psychiatric Association [APA], 2013), is a neurodevelopmental disorder that begins when neural networks supporting speech, language, and emotional functions are rapidly developing. The multifactorial dynamic pathways theory motivates experimental and clinical work to determine the specific factors that contribute to each child's pathway to the diagnosis of stuttering and those most likely to promote recovery.

from #Audiology via ola Kala on Inoreader http://article/60/9/2483/2652602/How-Stuttering-Develops-The-Multifactorial-Dynamic
via IFTTT

“Whatdunit?” Sentence Comprehension Abilities of Children With SLI: Sensitivity to Word Order in Canonical and Noncanonical Structures

Purpose
With Aim 1, we compared the comprehension of and sensitivity to canonical and noncanonical word order structures in school-age children with specific language impairment (SLI) and same-age typically developing (TD) children. Aim 2 centered on the developmental improvement of sentence comprehension in the groups. With Aim 3, we compared the comprehension error patterns of the groups.
Method
Using a “Whatdunit” agent selection task, 117 children with SLI and 117 TD children (ages 7;0–11;11 [years;months]), propensity matched on age, gender, mother's education, and family income, pointed to the picture that best represented the agent in semantically implausible canonical structures (subject–verb–object, subject relative) and noncanonical structures (passive, object relative).
Results
The SLI group performed worse than the TD group across sentence types. TD children demonstrated developmental improvement across each sentence type, but children with SLI showed improvement only for canonical sentences. Both groups chose the object noun as agent significantly more often than the noun appearing in a prepositional phrase.
Conclusions
In the absence of semantic–pragmatic cues, comprehension of canonical and noncanonical sentences by children with SLI is limited, with noncanonical sentence comprehension being disproportionately limited. The children's ability to make proper semantic role assignments to the noun arguments in sentences, especially noncanonical, is significantly hindered.
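
The propensity matching used above amounts to modeling group membership from the matching covariates and pairing each child with the nearest-propensity child in the other group; the sketch below uses greedy one-to-one nearest-neighbor matching on random placeholder data, without the calipers and ordering rules a real analysis would add.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical covariates: age, gender, mother's education, family income.
X = rng.normal(size=(234, 4))
group = rng.integers(0, 2, size=234)      # 1 = SLI, 0 = TD (toy labels)

# Propensity score: modeled probability of SLI group membership.
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

sli = np.where(group == 1)[0]
td = np.where(group == 0)[0]
pairs = [(i, td[np.argmin(np.abs(ps[td] - ps[i]))]) for i in sli]
print(pairs[:3])                          # first few matched (SLI, TD) index pairs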

from #Audiology via ola Kala on Inoreader http://article/60/9/2603/2652493/Whatdunit-Sentence-Comprehension-Abilities-of
via IFTTT

A Cross-Language Study of Acoustic Predictors of Speech Intelligibility in Individuals With Parkinson's Disease

Purpose
The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean).
Method
A total of 48 speakers from the 4 speaker groups (AE speakers with PD, Korean speakers with PD, healthy English speakers, and healthy Korean speakers) were asked to read a paragraph in their native languages. Four acoustic variables were analyzed: acoustic vowel space, voice onset time contrast scores, normalized pairwise variability index, and articulation rate. Speech intelligibility scores were obtained from scaled estimates of sentences extracted from the paragraph.
Results
The findings indicated that the multiple regression models of speech intelligibility were different in Korean and AE, even with the same set of predictor variables and with speakers matched on speech intelligibility across languages. Analysis of the descriptive data for the acoustic variables showed the expected compression of the vowel space in speakers with PD in both languages, lower normalized pairwise variability index scores in Korean compared with AE, and no differences within or across language in articulation rate.
Conclusions
The results indicate that the basis of an intelligibility deficit in dysarthria is likely to depend on the native language of the speaker and listener. Additional research is required to explore other potential predictor variables, as well as additional language comparisons to pursue cross-linguistic considerations in classification and diagnosis of dysarthria types.
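
A per-language multiple regression of the kind compared here has a familiar form; the statsmodels sketch below regresses intelligibility on the four acoustic predictors, with random placeholder values in place of the study's measurements.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "intelligibility": rng.uniform(50, 100, 24),  # scaled estimates (toy values)
    "vowel_space": rng.normal(size=24),           # acoustic vowel space area
    "vot_contrast": rng.normal(size=24),          # voice onset time contrast score
    "npvi": rng.normal(size=24),                  # normalized pairwise variability
    "artic_rate": rng.normal(size=24),            # articulation rate
})

# Fit one such model per language, then compare which predictors carry weight.
model = smf.ols("intelligibility ~ vowel_space + vot_contrast + npvi + artic_rate",
                data=df).fit()
print(model.params)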

from #Audiology via ola Kala on Inoreader http://article/60/9/2506/2650812/A-CrossLanguage-Study-of-Acoustic-Predictors-of
via IFTTT

Distributed Training Enhances Implicit Sequence Acquisition in Children With Specific Language Impairment

Purpose
This study explored the effects of 2 different training structures on the implicit acquisition of a sequence in a serial reaction time (SRT) task in children with and without specific language impairment (SLI).
Method
All of the children underwent 3 training sessions, followed by a retention session 2 weeks after the last session. In the massed-training condition, the 3 training sessions were in immediate succession on 1 day, whereas in the distributed-training condition, the 3 training sessions were spread over a 1-week period in an expanding schedule format.
Results
Statistical analyses showed that the children with normal language were unaffected by the training conditions, performing the SRT task similarly in both training conditions. The children with SLI, however, were affected by the training structure, performing the SRT task better when the training sessions were spaced over time rather than clustered on 1 day.
Conclusion
This study demonstrated that although intensive training does not increase learning in children with SLI, distributing training sessions over time does. The implications of these results for the learning abilities of children with SLI are discussed, as are the mechanisms involved in massed versus distributed learning.

from #Audiology via ola Kala on Inoreader http://article/60/9/2636/2653205/Distributed-Training-Enhances-Implicit-Sequence
via IFTTT

Short-Term Effect of Two Semi-Occluded Vocal Tract Training Programs on the Vocal Quality of Future Occupational Voice Users: “Resonant Voice Training Using Nasal Consonants” Versus “Straw Phonation”

Purpose
The purpose of this study was to determine the short-term effect of 2 semi-occluded vocal tract training programs, “resonant voice training using nasal consonants” versus “straw phonation,” on the vocal quality of vocally healthy future occupational voice users.
Method
A multigroup pretest–posttest randomized control group design was used. Thirty healthy speech-language pathology students with a mean age of 19 years (range: 17–22 years) were randomly assigned to a resonant voice training group (practicing resonant exercises across 6 weeks, n = 10), a straw phonation group (practicing straw phonation across 6 weeks, n = 10), or a control group (receiving no voice training, n = 10). A voice assessment protocol consisting of both subjective (questionnaire, participant's self-report, auditory–perceptual evaluation) and objective (maximum performance task, aerodynamic assessment, voice range profile, acoustic analysis, acoustic voice quality index, dysphonia severity index) measurements was used to evaluate the participants' voices pre- and posttraining. Groups were compared over time using linear mixed models and generalized linear mixed models. Within-group effects of time were determined using post hoc pairwise comparisons.
Results
No significant time × group interactions were found for any of the outcome measures, indicating no differences in evolution over time among the 3 groups. Within-group effects of time showed a significant improvement in dysphonia severity index in the resonant voice training group, and a significant improvement in the intensity range in the straw phonation group.
Conclusions
Results suggest that the semi-occluded vocal tract training programs using resonant voice training and straw phonation may have a positive impact on the vocal quality and vocal capacities of future occupational voice users. The resonant voice training caused an improved dysphonia severity index, and the straw phonation training caused an expansion of the intensity range in this population.
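
The group-by-time comparison reported here maps onto a standard linear mixed model with a random intercept per participant; a minimal statsmodels sketch follows, with fabricated data standing in for the study's outcome measures.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "dsi": rng.normal(3, 1, 60),          # dysphonia severity index (toy values)
    "time": np.tile(["pre", "post"], 30),
    "group": np.repeat(["resonant", "straw", "control"], 20),
    "subject": np.repeat(np.arange(30), 2),
})

# The time x group interaction tests whether the training programs change
# the outcome differently over time; the random intercept handles repeated
# measures within participants.
m = smf.mixedlm("dsi ~ time * group", df, groups=df["subject"]).fit()
print(m.summary())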

from #Audiology via ola Kala on Inoreader http://article/60/9/2519/2652563/ShortTerm-Effect-of-Two-SemiOccluded-Vocal-Tract
via IFTTT

Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss

Purpose
The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids.
Method
Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performances.
Results
Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance.
Conclusion
Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performance of an ASR-based system. Whether the ASR system is similarly successful in predicting speech processing in noise and in older people with ARHL remains to be determined.
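
The human–ASR comparison ultimately reduces to correlating two score vectors per listening condition; a tiny scipy sketch (scores invented for illustration):

from scipy.stats import pearsonr

human = [92, 85, 71, 55, 40, 33]          # % correct, simulated-ARHL listeners
asr = [90, 82, 65, 50, 30, 25]            # % correct, ASR on the same materials

r, p = pearsonr(human, asr)
print(f"r = {r:.2f}, p = {p:.4f}")        # the study reports coefficients up to .99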

from #Audiology via ola Kala on Inoreader http://article/60/9/2394/2648888/Automatic-Speech-Recognition-Predicts-Speech
via IFTTT

Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids

Purpose
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels—in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands—in listeners with hearing impairment using hearing aids.
Method
The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity.
Results
Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation.
Conclusion
Consonants and vowels differed in the benefits afforded by their associated visual cues, as indicated by the degree of audiovisual benefit and the reduction in cognitive demands linked to their identification when presented audiovisually.
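
An isolation point in a gating paradigm is the first gate from which identification is correct and remains correct; computing it from a per-gate response track is a short scan, sketched below with an invented gate duration.

def isolation_point(correct_by_gate, gate_ms=40):
    """Return the time (ms) of the first gate after which every response
    is correct; None if identification never stabilizes."""
    for i in range(len(correct_by_gate)):
        if all(correct_by_gate[i:]):
            return (i + 1) * gate_ms
    return None

# Hypothetical response track for one consonant across successive gates.
print(isolation_point([False, False, True, False, True, True, True]))  # 200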

from #Audiology via ola Kala on Inoreader http://article/60/9/2687/2635215/Visual-Cues-Contribute-Differentially-to
via IFTTT

Inner Speech's Relationship With Overt Speech in Poststroke Aphasia

Purpose
Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition.
Method
Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years; time since stroke 8–111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004).
Results
The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech.
Conclusions
As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps because perceived task difficulty encourages reliance on inner speech). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile.
Supplemental Materials
http://ift.tt/2xiwlv4

from #Audiology via ola Kala on Inoreader http://article/60/9/2406/2653957/Inner-Speechs-Relationship-With-Overt-Speech-in
via IFTTT

Training Peer Partners to Use a Speech-Generating Device With Classmates With Autism Spectrum Disorder: Exploring Communication Outcomes Across Preschool Contexts

Purpose
This study examined effects of a peer-mediated intervention that provided training on the use of a speech-generating device for preschoolers with severe autism spectrum disorder (ASD) and peer partners.
Method
Effects were examined using a multiple probe design across 3 children with ASD and limited to no verbal skills. Three peers without disabilities were taught to Stay, Play, and Talk using a GoTalk 4+ (Attainment Company) and were then paired up with a classmate with ASD in classroom social activities. Measures included rates of communication acts, communication mode and function, reciprocity, and engagement with peers.
Results
Following peer training, intervention effects were replicated across 3 peers, who all demonstrated an increased level and upward trend in communication acts to their classmates with ASD. Outcomes also revealed moderate intervention effects and increased levels of peer-directed communication for 3 children with ASD in classroom centers. Additional analyses revealed higher rates of communication in the added context of preferred toys and snack. The children with ASD also demonstrated improved communication reciprocity and peer engagement.
Conclusions
Results provide preliminary evidence on the benefits of combining peer-mediated and speech-generating device interventions to improve children's communication. Furthermore, it appears that preferred contexts are likely to facilitate greater communication and social engagement with peers.

from #Audiology via ola Kala on Inoreader http://article/60/9/2648/2653179/Training-Peer-Partners-to-Use-a-SpeechGenerating
via IFTTT

Indicators of Dysphagia in Aged Care Facilities

Purpose
The current cross-sectional study aimed to investigate risk factors for dysphagia in elderly individuals in aged care facilities.
Method
A total of 878 individuals from 42 aged care facilities were recruited for this study. The dependent outcome was speech therapist-determined swallowing function. Independent factors were Eating Assessment Tool score, oral motor assessment score, Mini-Mental State Examination, medical history, and various functional status ratings. Binomial logistic regression was used to identify independent variables associated with dysphagia in this cohort.
Results
Two statistical models were constructed. Model 1 used variables from case files without the need for hands-on assessment, and Model 2 used variables that could be obtained from hands-on assessment. Variables positively associated with dysphagia identified in Model 1 were male gender, total dependence for activities of daily living, need for feeding assistance, impaired mobility (requiring assistance to walk or use of a wheelchair), and history of pneumonia. Variables positively associated with dysphagia identified in Model 2 were Mini-Mental State Examination score, edentulousness, and oral motor assessment score.
Conclusions
Cognitive function, dentition, and oral motor function are significant indicators associated with the presence of dysphagia in the elderly. When assessing frail elderly individuals, case file information can help clinicians identify those who may have dysphagia.
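
Model 2 above is a binomial logistic regression; the statsmodels sketch below fits one on placeholder predictors and converts the coefficients to odds ratios.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "dysphagia": rng.integers(0, 2, 200),  # SLT-determined outcome (toy labels)
    "mmse": rng.normal(20, 5, 200),        # Mini-Mental State Examination score
    "edentulous": rng.integers(0, 2, 200),
    "oral_motor": rng.normal(10, 3, 200),  # oral motor assessment score
})

m = smf.logit("dysphagia ~ mmse + edentulous + oral_motor", data=df).fit()
print(np.exp(m.params))                    # odds ratio per unit of each predictor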

from #Audiology via ola Kala on Inoreader http://article/60/9/2416/2649235/Indicators-of-Dysphagia-in-Aged-Care-Facilities
via IFTTT

Speech Recognition and Cognitive Skills in Bimodal Cochlear Implant Users

Purpose
To examine the relation between speech recognition and cognitive skills in bimodal cochlear implant (CI) and hearing aid users.
Method
Seventeen bimodal CI users (28–74 years) were recruited to the study. Speech recognition tests were carried out in quiet and in noise. The cognitive tests employed included the Reading Span Test and the Trail Making Test (Daneman & Carpenter, 1980; Reitan, 1958, 1992), measuring working memory capacity and processing speed and executive functioning, respectively. Data were analyzed using paired-sample t tests, Pearson correlations, and partial correlations controlling for age.
Results
The results indicate that performance on some cognitive tests predicts speech recognition and that bimodal listening generates a significant improvement in speech recognition in quiet compared to unilateral CI listening. However, the current results also suggest that bimodal listening requires different cognitive skills than unimodal CI listening does. This is likely related to the relative difficulty of having to integrate 2 different signals and then map the integrated signal onto representations stored in long-term memory.
Conclusions
Even though participants obtained speech recognition benefit from bimodal listening, the results suggest that processing bimodal stimuli involves different cognitive skills than unimodal CI listening in quiet does. Clinically, it is important to consider this when assessing treatment outcomes.
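
The partial correlations controlling for age used above can be computed by correlating residuals after regressing each variable on age; a short numpy/scipy sketch with made-up vectors:

import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covar):
    """Correlation between x and y after removing the linear effect of covar."""
    def residualize(v):
        slope_intercept = np.polyfit(covar, v, 1)
        return v - np.polyval(slope_intercept, covar)
    return pearsonr(residualize(x), residualize(y))

rng = np.random.default_rng(4)
age = rng.uniform(28, 74, 17)              # the study's age range, n = 17
reading_span = rng.normal(30, 5, 17)       # working memory score (toy values)
speech_in_noise = rng.normal(50, 10, 17)   # % correct in noise (toy values)

print(partial_corr(reading_span, speech_in_noise, age))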

from #Audiology via ola Kala on Inoreader http://article/60/9/2752/2653958/Speech-Recognition-and-Cognitive-Skills-in-Bimodal
via IFTTT

Alveolar and Postalveolar Voiceless Fricative and Affricate Productions of Spanish–English Bilingual Children With Cochlear Implants

Purpose
This study investigates the production of voiceless alveolar and postalveolar fricatives and affricates by bilingual and monolingual children with hearing loss who use cochlear implants (CIs) and their peers with normal hearing (NH).
Method
Fifty-four children participated in our study, including 12 Spanish–English bilingual CI users (M = 6;0 [years;months]), 12 monolingual English-speaking children with CIs (M = 6;1), 20 bilingual children with NH (M = 6;5), and 10 monolingual English-speaking children with NH (M = 5;10). Picture elicitation targeting /s/, /tʃ/, and /ʃ/ was administered. Repeated-measures analyses of variance comparing group means for frication duration, rise time, and centroid frequency were conducted for the effects of CI use and bilingualism.
Results
All groups distinguished the target sounds in the 3 acoustic parameters examined. Regarding frication duration and rise time, the Spanish productions of bilingual children with CIs differed from their bilingual peers with NH. English frication duration patterns for bilingual versus monolingual CI users also differed. Centroid frequency was a stronger place cue for children with NH than for children with CIs.
Conclusion
Patterns of fricative and affricate production display effects of bilingualism and diminished signal, yielding unique patterns for bilingual and monolingual CI users.
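
Centroid frequency, one of the three acoustic parameters analyzed above, is the amplitude-weighted mean frequency of the frication spectrum; a minimal numpy sketch, with synthetic white noise standing in for a child's fricative production:

import numpy as np

def centroid_frequency(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

sr = 44100
frication = np.random.randn(4410)          # stand-in for a 100-ms frication segment
print(centroid_frequency(frication, sr))   # ~sr/4 for white noise; /s/ skews higher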

from #Audiology via ola Kala on Inoreader http://article/60/9/2427/2648980/Alveolar-and-Postalveolar-Voiceless-Fricative-and
via IFTTT

Input Subject Diversity Accelerates the Growth of Tense and Agreement: Indirect Benefits From a Parent-Implemented Intervention

Purpose
This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech.
Method
Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full “is” declaratives. Language growth in tense/agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth.
Results
Instruction increased parent use of full “is” declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full “is” declaratives, were also significant predictors, even after controlling for children's sentence diversity.
Conclusions
These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system.

from #Audiology via ola Kala on Inoreader http://article/60/9/2619/2654122/Input-Subject-Diversity-Accelerates-the-Growth-of
via IFTTT
