Thursday, June 23, 2016

The Effect of Auditory Information on Patterns of Intrusions and Reductions

Purpose
The study investigates whether auditory information affects the nature of intrusion and reduction errors in reiterated speech. These errors are hypothesized to arise from autonomous mechanisms that stabilize movement coordination. The specific question addressed is whether auditory information affects this stabilization process and thereby influences the occurrence of intrusions and reductions.
Methods
Fifteen speakers produced word pairs with alternating onset consonants and identical rhymes repetitively at a normal and fast speaking rate, in masked and unmasked speech. Movement ranges of the tongue tip, tongue dorsum, and lower lip during onset consonants were retrieved from kinematic data collected with electromagnetic articulography. Reductions and intrusions were defined as statistical outliers from movement range distributions of target and nontarget articulators, respectively.
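The abstract does not give the exact outlier criterion, so the following is only a minimal sketch of the general idea, assuming a simple z-score cutoff: a nontarget articulator's gesture counts as an intrusion when its movement range is unusually large, and a target articulator's gesture counts as a reduction when its range is unusually small. The cutoff value and data layout are hypothetical, not taken from the study.

```python
import numpy as np

def flag_errors(target_ranges, nontarget_ranges, z_cut=2.0):
    """Flag reductions (unusually small target movements) and intrusions
    (unusually large nontarget movements) as z-score outliers.

    target_ranges, nontarget_ranges: 1-D arrays of movement ranges (mm),
    one value per produced onset consonant. z_cut is a hypothetical cutoff.
    """
    t = np.asarray(target_ranges, dtype=float)
    n = np.asarray(nontarget_ranges, dtype=float)

    # Standardize each distribution separately.
    zt = (t - t.mean()) / t.std(ddof=1)
    zn = (n - n.mean()) / n.std(ddof=1)

    reductions = zt < -z_cut   # target articulator moved far less than usual
    intrusions = zn > z_cut    # nontarget articulator moved far more than usual
    return reductions, intrusions

# Example with made-up movement ranges (mm) for one trial:
rng = np.random.default_rng(0)
target = rng.normal(8.0, 1.0, 60)      # e.g., tongue tip during /t/
nontarget = rng.normal(1.0, 0.3, 60)   # e.g., tongue dorsum during /t/
red, intr = flag_errors(target, nontarget)
print(red.sum(), "reductions,", intr.sum(), "intrusions")
```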
Results
Regardless of masking condition, the number of intrusions and reductions increased during the course of a trial, suggesting movement stabilization. However, compared with unmasked speech, speakers made fewer intrusions in masked speech. The number of reductions was not significantly affected.
Conclusions
Masking of auditory information resulted in fewer intrusions, suggesting that speakers were able to pay closer attention to their articulatory movements. This highlights a possible stabilizing role for proprioceptive information in speech movement coordination.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJEoV
via IFTTT

Return to Work and Social Communication Ability Following Severe Traumatic Brain Injury

Purpose
Return to competitive employment presents a major challenge to adults who survive traumatic brain injury (TBI). This study was undertaken to better understand factors that shape employment outcome by comparing the communication profiles and self-awareness of communication deficits of adults who return to and maintain employment with those who do not.
Method
Forty-six dyads (46 adults with TBI, 46 relatives) were recruited into 2 groups based on the current employment status (employed or unemployed) of participants with TBI. Groups did not differ in regard to sex, age, education, preinjury employment, injury severity, or time postinjury. The La Trobe Communication Questionnaire (Douglas, O'Flaherty, & Snow, 2000) was used to measure communication. Group comparisons on La Trobe Communication Questionnaire scores were analyzed by using mixed 2 × 2 analysis of variance (between factor: employment status; within factor: source of perception).
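As a hedged illustration of this analysis (not the authors' actual code), a 2 × 2 mixed ANOVA with employment status as the between-subjects factor and source of perception (self vs. relative) as the within-subjects factor can be run with the pingouin library; the data, column names, and scores below are invented.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Build a toy long-format dataset; all names and values are hypothetical.
rng = np.random.default_rng(1)
rows = []
for dyad in range(1, 21):
    employment = "employed" if dyad <= 10 else "unemployed"
    base = 45 if employment == "employed" else 50
    for source in ("self", "relative"):
        bump = 5 if (employment == "unemployed" and source == "relative") else 0
        rows.append({"dyad": dyad, "employment": employment, "source": source,
                     "lcq_total": base + bump + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# 2 x 2 mixed ANOVA: between factor = employment status,
# within factor = source of perception (self vs. relative).
aov = pg.mixed_anova(data=df, dv="lcq_total", within="source",
                     subject="dyad", between="employment")
print(aov[["Source", "F", "p-unc"]])
```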
Results
Analysis yielded a significant group main effect (p = .002) and a significant interaction (p = .004). The employed group reported less frequent difficulties, according to both self-ratings and relatives' ratings. Consistent with the interaction, unemployed participants rated their own difficulties as less frequent than their relatives did, whereas employed participants rated their difficulties as more frequent than their relatives did.
Conclusion
Communication outcome and awareness of communication deficits play an important role in reintegration to the workplace following TBI.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AQuR
via IFTTT

Treating Speech Comprehensibility in Students With Down Syndrome

Purpose
This study examined whether a particular type of therapy (Broad Target Speech Recasts, BTSR) was superior to a contrast treatment in facilitating speech comprehensibility in conversations of students with Down syndrome who began treatment with high verbal imitation.
Method
We randomly assigned 51 students, aged 5 to 12 years, to either BTSR or a contrast treatment. Therapy occurred in hour-long 1-to-1 sessions in students' schools twice per week for 6 months.
Results
For students who entered treatment just above the sample average in verbal-imitation skill, BTSR was superior to the contrast treatment in facilitating the growth of speech comprehensibility in conversational samples. The number of speech recasts mediated or explained the BTSR treatment effect on speech comprehensibility.
Conclusion
Speech comprehensibility is malleable in school-age students with Down syndrome. BTSR facilitates comprehensibility in students with just above the sample average level of verbal imitation prior to treatment. Speech recasts in BTSR are largely responsible for this effect.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AXqB
via IFTTT

Repair or Violation Detection? Pre-Attentive Processing Strategies of Phonotactic Illegality Demonstrated on the Constraint of g-Deletion in German

Purpose
Effects of categorical phonotactic knowledge on pre-attentive speech processing were investigated by presenting illegal speech input that violated a phonotactic constraint in German called “g-deletion.” The present study aimed to extend previous findings of automatic processing of phonotactic violations and to investigate the role of stimulus context in triggering either an automatic phonotactic repair or a detection of the violation.
Method
The mismatch negativity event-related potential component was obtained in 2 identical cross-sectional experiments with speaker variation and 16 healthy adult participants each. Four pseudowords were used as stimuli, 3 of them phonotactically legal and 1 illegal. Stimuli were contrasted pairwise in passive oddball conditions and presented binaurally via headphones. Results were analyzed by means of mixed design analyses of variance.
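The stimulus lists themselves are not described beyond pairwise passive oddball contrasts, but the structure of such a block is easy to sketch. The snippet below generates a hypothetical oddball sequence with an assumed deviant probability and the common constraint that deviants never occur back to back; the probability, list length, and item names are illustrative, not taken from the study.

```python
import random

def oddball_sequence(standard, deviant, n_trials=500, p_deviant=0.15, seed=0):
    """Build a passive-oddball trial list in which `deviant` occurs with
    probability p_deviant and never twice in a row (a common MMN constraint).
    All parameters are illustrative; the study's actual lists are not given."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == deviant:
            seq.append(standard)           # no two deviants in succession
        else:
            seq.append(deviant if rng.random() < p_deviant else standard)
    return seq

# Example: a legal pseudoword as standard, an illegal (g-deletion-violating)
# pseudoword as deviant; the labels here are placeholders, not the stimuli used.
trials = oddball_sequence("legal_pseudoword", "illegal_pseudoword")
print(trials[:12], "... deviant rate:",
      trials.count("illegal_pseudoword") / len(trials))
```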
Results
Phonotactically illegal stimuli were processed differently from legal stimuli. Results indicate evidence for both automatic repair and detection of the phonotactic violation, depending on the linguistic context in which the illegal stimulus was embedded.
Conclusions
These findings corroborate notions that categorical phonotactic knowledge is activated and applied even in the absence of attention. Thus, our findings contribute to the general understanding of sublexical phonological processing and may be of use for further developing speech recognition models.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJPke
via IFTTT

Measuring Speech Comprehensibility in Students with Down Syndrome

Purpose
There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which in turn allowed us to examine the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome.
Method
Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Likert ratings of speech comprehensibility, averaged across observers, served as the ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted the orthography-based measure of speech comprehensibility.
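As a minimal sketch of how the two measures could be computed (the exact scoring rules are not given in the abstract), the ratings-based measure averages Likert ratings over raters and segments, and the orthography-based measure is the proportion of utterance attempts that transcribers could fully gloss; the rating scale, values, and counts below are hypothetical.

```python
import numpy as np

# Hypothetical Likert ratings for one student: 4 raters x four 5-min segments
# (higher = easier to understand; the scale endpoints are assumed).
ratings = np.array([
    [4, 5, 4, 3],
    [5, 5, 4, 4],
    [4, 4, 3, 4],
    [5, 4, 4, 3],
])
ratings_based = ratings.mean()   # average across raters and segments
print("Ratings-based comprehensibility:", round(ratings_based, 2))

# Orthography-based measure: proportion of utterance attempts fully glossed.
utterances_attempted = 120
utterances_fully_glossed = 87
orthography_based = utterances_fully_glossed / utterances_attempted
print("Orthography-based comprehensibility:", round(orthography_based, 2))
```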
Results
Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples.
Conclusion
Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AZia
via IFTTT

Does Working Memory Enhance or Interfere With Speech Fluency in Adults Who Do and Do Not Stutter? Evidence From a Dual-Task Paradigm

Purpose
The present study examined whether engaging working memory in a secondary task benefits speech fluency. Effects of dual-task conditions on speech fluency, rate, and errors were examined with respect to predictions derived from three related theoretical accounts of disfluencies.
Method
Nineteen adults who stutter and twenty adults who do not stutter participated in the study. All participants completed 2 baseline tasks: a continuous-speaking task and a working-memory (WM) task involving manipulations of domain, load, and interstimulus interval. In the dual-task portion of the experiment, participants simultaneously performed the speaking task with each unique combination of WM conditions.
Results
All speakers showed similar fluency benefits and decrements in WM accuracy as a result of dual-task conditions. Fluency effects were specific to atypical forms of disfluency and were comparable across WM-task manipulations. Changes in fluency were accompanied by reductions in speaking rate but not by corresponding changes in overt errors.
Conclusions
Findings suggest that WM contributes to disfluencies regardless of stuttering status and that engaging WM resources while speaking enhances fluency. Further research is needed to verify the cognitive mechanism involved in this effect and to determine how these findings can best inform clinical intervention.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJI8d
via IFTTT

Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images

Purpose
Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time.
Method
The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm.
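The abstract outlines a four-step pipeline. The sketch below covers only the one step that maps directly onto a widely available call, random-walker segmentation from scikit-image; the harmonic-phase tracking, super-resolution reconstruction, and incompressible deformation estimation steps have no standard off-the-shelf implementation and are noted only in comments. The seed positions and synthetic volume are purely illustrative.

```python
import numpy as np
from skimage.segmentation import random_walker

# Step 3 of the pipeline described above: segment the tongue region of a
# super-resolution cine MR volume with the random-walker algorithm.
# Steps 1 (HARP 2-D motion), 2 (super-resolution volume construction), and
# 4 (incompressible 3-D deformation estimation) are not sketched here.

def segment_tongue(volume, fg_seed, bg_seed, beta=130):
    """Return a boolean tongue mask for a 3-D volume given one foreground
    and one background seed voxel (index tuples); beta tunes edge weighting."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    labels[fg_seed] = 1   # inside the tongue
    labels[bg_seed] = 2   # outside the tongue
    return random_walker(volume, labels, beta=beta) == 1

# Tiny synthetic demo: a bright blob standing in for the tongue.
vol = np.zeros((32, 32, 32))
vol[10:22, 10:22, 10:22] = 1.0
vol += np.random.default_rng(0).normal(0, 0.05, vol.shape)
mask = segment_tongue(vol, fg_seed=(16, 16, 16), bg_seed=(2, 2, 2))
print("Tongue voxels:", int(mask.sum()))
```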
Results
The method is evaluated with a control group and a group of people who had received a glossectomy, each carrying out a speech task. A 2-step principal-components analysis is then used to reveal the unique motion patterns of the subjects. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed.
Conclusion
Tests of the method with a varied collection of subjects show that it can capture patient motion patterns and indicate its potential value in future speech studies.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AS6g
via IFTTT

Narratives in Two Languages: Storytelling of Bilingual Cantonese–English Preschoolers

Purpose
The aim of this study was to compare narratives generated by 4-year-old and 5-year-old children who were bilingual in English and Cantonese.
Method
The sample included 47 children (23 who were 4 years old and 24 who were 5 years old) living in Toronto, Ontario, Canada, who spoke both Cantonese and English. The participants spoke and heard predominantly Cantonese in the home. Participants generated a story in English and Cantonese by using a wordless picture book; language order was counterbalanced. Data were transcribed and coded for story grammar, morphosyntactic quality, mean length of utterance in words, and the number of different words.
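As a brief, hypothetical illustration of two of the microstructure measures named above, the snippet below computes mean length of utterance in words (MLUw) and number of different words (NDW) from a transcribed sample; the tokenization is deliberately naive and the sample utterances are invented, not drawn from the study's data.

```python
def mlu_words(utterances):
    """Mean length of utterance in words over a list of transcribed utterances."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

def number_of_different_words(utterances):
    """Number of different word types in the sample (naive, case-folded)."""
    words = [w.lower().strip(".,?!") for u in utterances for w in u.split()]
    return len(set(words))

# Invented English sample utterances from a wordless-picture-book narration.
sample = [
    "The frog jumped out of the jar",
    "The boy looked in the boot",
    "The dog fell out the window",
]
print("MLUw:", round(mlu_words(sample), 2))
print("NDW:", number_of_different_words(sample))
```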
Results
Repeated measures analysis of variance revealed higher story grammar scores in English than in Cantonese, but no other significant main effects of language were observed. Analyses also revealed that older children had higher story grammar, mean length of utterance in words, and morphosyntactic quality scores than younger children in both languages. Hierarchical regressions indicated that Cantonese story grammar predicted English story grammar and Cantonese microstructure predicted English microstructure. However, no correlation was observed between Cantonese and English morphosyntactic quality.
Conclusions
The results of this study have implications for speech-language pathologists who collect narratives in Cantonese and English from bilingual preschoolers. The results suggest that there is a possible transfer in narrative abilities between the two languages.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJ76u
via IFTTT

On Peer Review

Purpose
This letter briefly reviews ideas about the purpose and benefits of peer review and reaches some idealistic conclusions about the process.
Method
The author uses both literature review and meditation born of long experience.
Results
From a cynical perspective, peer review constitutes an adversarial process featuring domination of the weak by the strong and exploitation of authors and reviewers by editors and publishers, resulting in suppression of new ideas, delayed publication of important research, and bad feelings ranging from confusion to fury. More optimistically, peer review can be viewed as a system in which reviewers and editors volunteer thousands of hours to work together with authors, to the end of furthering human knowledge.
Conclusion
Editors and authors will encounter both peer-review cynics and idealists in their careers, but in the author's experience the latter are far more prevalent. Reviewers and editors can help increase the positive benefits of peer review (and improve the culture of science) by viewing the system as one in which they work with authors on behalf of high-quality publications and better science. Authors can contribute by preparing papers carefully prior to submission and by interpreting reviewers' and editors' suggestions in this collegial spirit, however difficult this may be in some cases.

from #Audiology via ola Kala on Inoreader http://ift.tt/28T3gdA
via IFTTT

The Use of Voice Cues for Speaker Gender Recognition in Cochlear Implant Recipients

Purpose
The focus of this study was to examine the influence of fundamental frequency (F0) and vocal tract length (VTL) modifications on speaker gender recognition in cochlear implant (CI) recipients for different stimulus types.
Method
Single words and sentences were manipulated using isolated or combined F0 and VTL cues. Using an 11-point rating scale, CI recipients and listeners with normal hearing rated the maleness/femaleness of the corresponding voice.
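The abstract does not name the signal-processing tool used for these manipulations. As a rough, hedged sketch of the F0 part only, the snippet below shifts the pitch of a recorded word with librosa; vocal tract length cues are typically manipulated by rescaling the spectral envelope with vocoder-based analysis/resynthesis, which is beyond a few lines and is only noted in a comment. The file name and shift size are illustrative, not the study's values.

```python
import librosa
import soundfile as sf

# Load a recorded word (file name is illustrative).
y, sr = librosa.load("word.wav", sr=None)

# F0-only manipulation: shift pitch down 8 semitones toward a more
# male-sounding voice (the shift size is illustrative).
y_lower_f0 = librosa.effects.pitch_shift(y, sr=sr, n_steps=-8)
sf.write("word_f0_lowered.wav", y_lower_f0, sr)

# VTL manipulation is not shown: it requires rescaling the spectral envelope
# independently of F0, typically with vocoder-based analysis/resynthesis.
```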
Results
Speaker gender ratings for combined F0 and VTL modifications were similar across all stimulus types in both CI recipients and listeners with normal hearing, although the CI recipients showed somewhat greater ambiguity. In contrast to listeners with normal hearing, CI recipients gave similar ratings for combined F0-VTL and F0-only modifications when single words were used as stimuli; when sentences were used, however, F0-VTL-based ratings differed from F0-based ratings. Modifying VTL cues alone did not affect ratings in the CI group.
Conclusions
Whereas speaker gender ratings by listeners with normal hearing relied on combined VTL and F0 cues, CI recipients made only limited use of VTL cues, which might be one reason behind problems with identifying the speaker on the basis of voice. However, use of the voice cues depended on stimulus type, with the greater information in sentences allowing a more detailed analysis than single words in both listener groups.

from #Audiology via ola Kala on Inoreader http://ift.tt/291zocb
via IFTTT

Embedded Instruction Improves Vocabulary Learning During Automated Storybook Reading Among High-Risk Preschoolers

Purpose
We investigated a small-group intervention designed to teach vocabulary and comprehension skills to preschoolers who were at risk for language and reading disabilities. These language skills are important and reliable predictors of later academic achievement.
Method
Preschoolers heard prerecorded stories 3 times per week over the course of a school year. A cluster randomized design was used to evaluate the effects of hearing storybooks with and without embedded vocabulary and comprehension lessons. A total of 32 classrooms were randomly assigned to experimental and comparison conditions. Approximately 6 children per classroom demonstrating low vocabulary knowledge, totaling 195 children, were enrolled.
Results
Preschoolers in the comparison condition did not learn novel, challenging vocabulary words to which they were exposed in story contexts, whereas preschoolers receiving embedded lessons demonstrated significant learning gains, although vocabulary learning diminished over the course of the school year. Modest gains in comprehension skills did not differ between the two groups.
Conclusion
The Story Friends curriculum appears to be highly feasible for delivery in early childhood educational settings and effective at teaching challenging vocabulary to high-risk preschoolers.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJ8Yl
via IFTTT

On Older Listeners' Ability to Perceive Dynamic Pitch

Purpose
Natural speech comes with variation in pitch, which serves as an important cue for speech recognition. The present study investigated older listeners' dynamic pitch perception with a focus on interindividual variability. In particular, we asked whether some older listeners' inability to perceive dynamic pitch stems from a higher susceptibility to interference from formant changes.
Method
A total of 22 older listeners and 21 younger controls with at least near-typical hearing were tested on dynamic pitch identification and discrimination tasks using synthetic monophthong and diphthong vowels.
Results
The older listeners' ability to detect changes in pitch varied substantially, even when musical and linguistic experiences were controlled. The influence of formant patterns on dynamic pitch perception was evident in both groups of listeners. Overall, strong pitch contours (i.e., more dynamic) were perceived better than weak pitch contours (i.e., more monotonic), particularly with rising pitch patterns.
Conclusions
The findings are in accordance with the literature demonstrating some older individuals' difficulty perceiving dynamic pitch cues in speech. Moreover, they suggest that this problem may be prominent when the dynamic pitch is carried by natural speech and when the pitch contour is not strong.

from #Audiology via ola Kala on Inoreader http://ift.tt/291zvVf
via IFTTT

Continuous Performance Tasks: Not Just About Sustaining Attention

Purpose
Continuous performance tasks (CPTs) are used to measure individual differences in sustained attention. Many different stimuli have been used as response targets without consideration of their impact on task performance. Here, we compared CPT performance in typically developing adults and children to assess the role of stimulus processing on error rates and reaction times.
Method
Participants completed a CPT that required responding to infrequent targets while monitoring and withholding responses to regular nontargets. Performance on 3 stimulus conditions was compared: visual letters (X and O), their auditory analogs, and auditory pure tones.
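As a small, hypothetical sketch of how such a task is typically scored (not the authors' software), the function below takes a trial log and separates omission errors (missed targets), commission errors (responses to nontargets), and reaction times to targets; the field names and example trials are invented.

```python
def score_cpt(trials):
    """Score a continuous performance task.

    trials: list of dicts with invented fields
      'is_target' - True for the infrequent target stimulus
      'responded' - True if the participant pressed the button
      'rt_ms'     - reaction time in ms (None if no response)
    """
    omissions = sum(t["is_target"] and not t["responded"] for t in trials)
    commissions = sum((not t["is_target"]) and t["responded"] for t in trials)
    hits = [t["rt_ms"] for t in trials if t["is_target"] and t["responded"]]
    mean_rt = sum(hits) / len(hits) if hits else None
    return {"omission_errors": omissions,
            "commission_errors": commissions,
            "mean_target_rt_ms": mean_rt}

# Toy example: three trials.
log = [
    {"is_target": False, "responded": False, "rt_ms": None},
    {"is_target": True,  "responded": True,  "rt_ms": 412},
    {"is_target": False, "responded": True,  "rt_ms": 280},  # commission error
]
print(score_cpt(log))
```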
Results
Adults showed no difference in error propensity across the 3 conditions but had slower reaction times for auditory stimuli. Children had slower overall reaction times. They responded most quickly to the visual target and most slowly to the tone target. They also made more errors in the tone condition than in either the visual or the auditory spoken CPT conditions.
Conclusions
The results suggest error propensity and reaction time variations on CPTs cannot solely be interpreted as evidence of inattention. They also reflect stimulus-specific influences that must be considered when testing hypotheses about modality-specific deficits in sustained attention in populations with different developmental disorders.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJ8rk
via IFTTT

Prevalence and Nature of Hearing Loss in 22q11.2 Deletion Syndrome

Purpose
The purpose of this study was to clarify the prevalence, type, severity, and age-dependency of hearing loss in 22q11.2 deletion syndrome.
Method
Extensive audiological measurements were conducted in 40 persons with proven 22q11.2 deletion (aged 6–36 years). Besides air and bone conduction thresholds in the frequency range between 0.125 and 8.000 kHz, high-frequency thresholds up to 16.000 kHz were determined and tympanometry, acoustic reflex (AR) measurement, and distortion product otoacoustic emission (DPOAE) testing were performed.
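The abstract does not state the threshold criterion used to classify an ear as hearing impaired, so the snippet below is only an illustration of the kind of tabulation involved, assuming a conventional pure-tone-average cutoff of 20 dB HL over 0.5, 1, 2, and 4 kHz; both the cutoff and the example thresholds are hypothetical.

```python
def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000, 4000)):
    """Average air-conduction threshold (dB HL) over the given frequencies."""
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

def has_hearing_loss(thresholds_db_hl, cutoff_db_hl=20):
    """Hypothetical criterion: PTA worse than the cutoff counts as hearing loss."""
    return pure_tone_average(thresholds_db_hl) > cutoff_db_hl

# Invented thresholds for two ears (frequency in Hz -> threshold in dB HL).
ears = [
    {500: 15, 1000: 20, 2000: 30, 4000: 35},   # mild loss
    {500: 5, 1000: 10, 2000: 10, 4000: 15},    # within normal limits
]
n_impaired = sum(has_hearing_loss(e) for e in ears)
print(f"{n_impaired}/{len(ears)} ears classified as hearing impaired")
```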
Results
Hearing loss was identified in 59% of the tested ears and was mainly conductive in nature. In addition, a high-frequency sensorineural hearing loss with down-sloping curve was found in the majority of patients. Aberrant tympanometric results were recorded in 39% of the ears. In 85% of ears with a Type A or C tympanometric peak, ARs were absent. A DPOAE response in at least 6 frequencies was present in only 23% of the ears with a hearing threshold ≤30 dB HL. In patients above 14 years of age, there was a significantly lower percentage of measurable DPOAEs.
Conclusion
Hearing loss in 22q11.2 deletion syndrome is highly prevalent and both conductive and high-frequency sensorineural in nature. The age-dependent absence of DPOAEs in 22q11.2 deletion syndrome suggests cochlear damage underlying the high-frequency hearing loss.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/28QJDBB
via IFTTT

Story Goodness in Adolescents With Autism Spectrum Disorder (ASD) and in Optimal Outcomes From ASD

Purpose
This study examined narrative quality of adolescents with autism spectrum disorder (ASD) using a well-studied “story goodness” coding system.
Method
Narrative samples were analyzed for distinct aspects of story goodness and rated by naïve readers on dimensions of story goodness, accuracy, cohesiveness, and oddness. Adolescents with high-functioning ASD were compared with adolescents with typical development (TD; n = 15 per group). A second study compared narratives from adolescents across three groups: ASD, TD, and youths with “optimal outcomes,” who were diagnosed with ASD early in development but no longer meet criteria for ASD and have typical behavioral functioning.
Results
In both studies, the ASD group's narratives had lower composite quality scores compared with peers with typical development. In Study 2, narratives from the optimal outcomes group were intermediate in scores and did not differ significantly from those of either other group. However, naïve raters were able to detect qualitative narrative differences across groups.
Conclusions
Findings indicate that pragmatic deficits in ASD are salient and could have clinical relevance. Furthermore, results indicate subtle differences in pragmatic language skills for individuals with optimal outcomes despite otherwise typical language skills in other domains. These results highlight the need for clinical interventions tailored to the specific deficits of these populations.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/291AMeI
via IFTTT

Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss

Purpose
Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers.
Method
Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility.
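As a hedged illustration of one methodological detail above, presenting speech in noise at a fixed signal-to-noise ratio, the snippet below scales a noise signal so that the mixture reaches a chosen SNR. The 90%-intelligibility SNR itself was determined empirically in the study and is not reproduced here, so the value and the synthetic signals in the example are arbitrary.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db,
    then return the mixture. Both inputs are 1-D float arrays of equal length."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# Toy example with synthetic signals; the SNR value here is arbitrary,
# not the study's empirically determined ~90%-intelligibility level.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
noise = rng.normal(0, 1, 16000)
mixture = mix_at_snr(speech, noise, snr_db=3.0)
print("Mixture RMS:", round(float(np.sqrt(np.mean(mixture ** 2))), 3))
```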
Results
Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise.
Conclusions
We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/28U86Z1
via IFTTT

The Effect of Auditory Information on Patterns of Intrusions and Reductions

Purpose
The study investigates whether auditory information affects the nature of intrusion and reduction errors in reiterated speech. These errors are hypothesized to arise as a consequence of autonomous mechanisms to stabilize movement coordination. The specific question addressed is whether this process is affected by auditory information so that it will influence the occurrence of intrusions and reductions.
Methods
Fifteen speakers produced word pairs with alternating onset consonants and identical rhymes repetitively at a normal and fast speaking rate, in masked and unmasked speech. Movement ranges of the tongue tip, tongue dorsum, and lower lip during onset consonants were retrieved from kinematic data collected with electromagnetic articulography. Reductions and intrusions were defined as statistical outliers from movement range distributions of target and nontarget articulators, respectively.
Results
Regardless of masking condition, the number of intrusions and reductions increased during the course of a trial, suggesting movement stabilization. However, compared with unmasked speech, speakers made fewer intrusions in masked speech. The number of reductions was not significantly affected.
Conclusions
Masking of auditory information resulted in fewer intrusions, suggesting that speakers were able to pay closer attention to their articulatory movements. This highlights a possible stabilizing role for proprioceptive information in speech movement coordination.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJEoV
via IFTTT

Return to Work and Social Communication Ability Following Severe Traumatic Brain Injury

Purpose
Return to competitive employment presents a major challenge to adults who survive traumatic brain injury (TBI). This study was undertaken to better understand factors that shape employment outcome by comparing the communication profiles and self-awareness of communication deficits of adults who return to and maintain employment with those who do not.
Method
Forty-six dyads (46 adults with TBI, 46 relatives) were recruited into 2 groups based on the current employment status (employed or unemployed) of participants with TBI. Groups did not differ in regard to sex, age, education, preinjury employment, injury severity, or time postinjury. The La Trobe Communication Questionnaire (Douglas, O'Flaherty, & Snow, 2000) was used to measure communication. Group comparisons on La Trobe Communication Questionnaire scores were analyzed by using mixed 2 × 2 analysis of variance (between factor: employment status; within factor: source of perception).
Results
Analysis yielded a significant group main effect (p = .002) and a significant interaction (p = .004). The employed group reported less frequent difficulties (self and relatives). Consistent with the interaction, unemployed participants perceived themselves to have less frequent difficulties than their relatives perceived, whereas employed participants reported more frequent difficulties than their relatives.
Conclusion
Communication outcome and awareness of communication deficits play an important role in reintegration to the workplace following TBI.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AQuR
via IFTTT

Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss

Purpose
Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers.
Method
Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility.
Results
Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise.
Conclusions
We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

from #Audiology via ola Kala on Inoreader http://ift.tt/28U86Z1
via IFTTT

Treating Speech Comprehensibility in Students With Down Syndrome

Purpose
This study examined whether a particular type of therapy (Broad Target Speech Recasts, BTSR) was superior to a contrast treatment in facilitating speech comprehensibility in conversations of students with Down syndrome who began treatment with initially high verbal imitation.
Method
We randomly assigned 51 5- to 12-year-old students to either BTSR or a contrast treatment. Therapy occurred in hour-long 1-to-1 sessions in students' schools twice per week for 6 months.
Results
For students who entered treatment just above the sample average in verbal-imitation skill, BTSR was superior to the contrast treatment in facilitating the growth of speech comprehensibility in conversational samples. The number of speech recasts mediated or explained the BTSR treatment effect on speech comprehensibility.
Conclusion
Speech comprehensibility is malleable in school-age students with Down syndrome. BTSR facilitates comprehensibility in students with just above the sample average level of verbal imitation prior to treatment. Speech recasts in BTSR are largely responsible for this effect.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AXqB
via IFTTT

Repair or Violation Detection? Pre-Attentive Processing Strategies of Phonotactic Illegality Demonstrated on the Constraint of g-Deletion in German

Purpose
Effects of categorical phonotactic knowledge on pre-attentive speech processing were investigated by presenting illegal speech input that violated a phonotactic constraint in German called “g-deletion.” The present study aimed to extend previous findings of automatic processing of phonotactic violations and to investigate the role of stimulus context in triggering either an automatic phonotactic repair or a detection of the violation.
Method
The mismatch negativity event-related potential component was obtained in 2 identical cross-sectional experiments with speaker variation and 16 healthy adult participants each. Four pseudowords were used as stimuli, 3 of them phonotactically legal and 1 illegal. Stimuli were contrasted pairwise in passive oddball conditions and presented binaurally via headphones. Results were analyzed by means of mixed design analyses of variance.
Results
Phonotactically illegal stimuli were found to be processed differently compared to legal ones. Results indicate evidence for both automatic repair and detection of the phonotactic violation depending on the linguistic context the illegal stimulus was embedded in.
Conclusions
These findings corroborate notions that categorical phonotactic knowledge is activated and applied even in the absence of attention. Thus, our findings contribute to the general understanding of sublexical phonological processing and may be of use for further developing speech recognition models.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJPke
via IFTTT

Measuring Speech Comprehensibility in Students with Down Syndrome

Purpose
There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome.
Method
Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility.
Results
Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples.
Conclusion
Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians.

from #Audiology via ola Kala on Inoreader http://ift.tt/291AZia
via IFTTT

Does Working Memory Enhance or Interfere With Speech Fluency in Adults Who Do and Do Not Stutter? Evidence From a Dual-Task Paradigm

Purpose
The present study examined whether engaging working memory in a secondary task benefits speech fluency. Effects of dual-task conditions on speech fluency, rate, and errors were examined with respect to predictions derived from three related theoretical accounts of disfluencies.
Method
Nineteen adults who stutter and twenty adults who do not stutter participated in the study. All participants completed 2 baseline tasks: a continuous-speaking task and a working-memory (WM) task involving manipulations of domain, load, and interstimulus interval. In the dual-task portion of the experiment, participants simultaneously performed the speaking task with each unique combination of WM conditions.
Results
All speakers showed similar fluency benefits and decrements in WM accuracy as a result of dual-task conditions. Fluency effects were specific to atypical forms of disfluency and were comparable across WM-task manipulations. Changes in fluency were accompanied by reductions in speaking rate but not by corresponding changes in overt errors.
Conclusions
Findings suggest that WM contributes to disfluencies regardless of stuttering status and that engaging WM resources while speaking enhances fluency. Further research is needed to verify the cognitive mechanism involved in this effect and to determine how these findings can best inform clinical intervention.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJI8d
via IFTTT

Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images

Purpose
Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time.
Method
The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm.
Results
Evaluation of the method is presented with a control group and a group of people who had received a glossectomy carrying out a speech task. A 2-step principal-components analysis is then used to reveal the unique motion patterns of the subjects. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed.
Conclusion
Tests of the method with a varied collection of subjects show its capability to capture patient motion patterns and indicate its potential value in future speech studies.
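The random-walker segmentation step named in the Method can be sketched generically with scikit-image, as below; the synthetic volume and seed placement are hypothetical, and this is not the authors' implementation.

# Generic random-walker segmentation sketch on a synthetic 3-D volume.
# Not the authors' pipeline; the volume and seeds are hypothetical.
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(2)
volume = rng.normal(0.2, 0.05, size=(32, 64, 64))   # background intensity
volume[10:22, 20:44, 20:44] += 0.6                   # brighter "tongue" region

labels = np.zeros_like(volume, dtype=int)
labels[16, 32, 32] = 1   # seed placed inside the tongue region
labels[2, 5, 5] = 2      # seed placed in the background

segmentation = random_walker(volume, labels, beta=130)
tongue_mask = segmentation == 1
print("segmented voxels:", int(tongue_mask.sum()))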

from #Audiology via ola Kala on Inoreader http://ift.tt/291AS6g
via IFTTT

Narratives in Two Languages: Storytelling of Bilingual Cantonese–English Preschoolers

Purpose
The aim of this study was to compare narratives generated by 4-year-old and 5-year-old children who were bilingual in English and Cantonese.
Method
The sample included 47 children (23 who were 4 years old and 24 who were 5 years old) living in Toronto, Ontario, Canada, who spoke both Cantonese and English. The participants spoke and heard predominantly Cantonese in the home. Participants generated a story in English and Cantonese by using a wordless picture book; language order was counterbalanced. Data were transcribed and coded for story grammar, morphosyntactic quality, mean length of utterance in words, and the number of different words.
Results
Repeated measures analysis of variance revealed higher story grammar scores in English than in Cantonese, but no other significant main effects of language were observed. Analyses also revealed that older children had higher story grammar, mean length of utterance in words, and morphosyntactic quality scores than younger children in both languages. Hierarchical regressions indicated that Cantonese story grammar predicted English story grammar and Cantonese microstructure predicted English microstructure. However, no correlation was observed between Cantonese and English morphosyntactic quality.
Conclusions
The results of this study have implications for speech-language pathologists who collect narratives in Cantonese and English from bilingual preschoolers. The results suggest that there is a possible transfer in narrative abilities between the two languages.
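The hierarchical regressions mentioned above amount to comparing nested models and their change in R²; a sketch with simulated data and hypothetical variable names follows.

# Hierarchical regression sketch: does Cantonese story grammar predict English
# story grammar over and above age? Simulated data and hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 47
age_months = rng.integers(48, 72, n)
cantonese_sg = rng.normal(20, 4, n)
english_sg = 0.2 * age_months + 0.6 * cantonese_sg + rng.normal(0, 3, n)
df = pd.DataFrame(dict(age=age_months, cantonese_sg=cantonese_sg, english_sg=english_sg))

step1 = smf.ols("english_sg ~ age", df).fit()
step2 = smf.ols("english_sg ~ age + cantonese_sg", df).fit()
print(f"R2 step 1 = {step1.rsquared:.2f}, step 2 = {step2.rsquared:.2f}, "
      f"delta R2 = {step2.rsquared - step1.rsquared:.2f}")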

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJ76u
via IFTTT

On Peer Review

Purpose
This letter briefly reviews ideas about the purpose and benefits of peer review and reaches some idealistic conclusions about the process.
Method
The author uses both literature review and meditation born of long experience.
Results
From a cynical perspective, peer review constitutes an adversarial process featuring domination of the weak by the strong and exploitation of authors and reviewers by editors and publishers, resulting in suppression of new ideas, delayed publication of important research, and bad feelings ranging from confusion to fury. More optimistically, peer review can be viewed as a system in which reviewers and editors volunteer thousands of hours to work together with authors, to the end of furthering human knowledge.
Conclusion
Editors and authors will encounter both peer-review cynics and idealists in their careers, but in the author's experience the latter are far more prevalent. Reviewers and editors can help increase the positive benefits of peer review (and improve the culture of science) by viewing the system as one in which they work with authors on behalf of high-quality publications and better science. Authors can contribute by preparing papers carefully prior to submission and by interpreting reviewers' and editors' suggestions in this collegial spirit, however difficult this may be in some cases.

from #Audiology via ola Kala on Inoreader http://ift.tt/28T3gdA
via IFTTT

The Use of Voice Cues for Speaker Gender Recognition in Cochlear Implant Recipients

Purpose
The focus of this study was to examine the influence of fundamental frequency (F0) and vocal tract length (VTL) modifications on speaker gender recognition in cochlear implant (CI) recipients for different stimulus types.
Method
Single words and sentences were manipulated using isolated or combined F0 and VTL cues. Using an 11-point rating scale, CI recipients and listeners with normal hearing rated the maleness/femaleness of the corresponding voice.
Results
Speaker gender ratings for combined F0 and VTL modifications were similar across all stimulus types in both CI recipients and listeners with normal hearing, although the CI recipients showed a somewhat larger ambiguity. In contrast to listeners with normal hearing, CI recipients gave similar ratings for F0-VTL and F0-only modifications when single words were used as stimuli. However, when sentences were used, a difference was found between F0-VTL–based and F0-based ratings. Modifying VTL cues alone did not affect ratings in the CI group.
Conclusions
Whereas speaker gender ratings by listeners with normal hearing relied on combined VTL and F0 cues, CI recipients made only limited use of VTL cues, which might be one reason for their difficulty identifying a speaker on the basis of voice. However, use of the voice cues depended on stimulus type, with the greater information in sentences allowing a more detailed analysis than single words in both listener groups.
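As a rough illustration of the two cues, F0 is typically shifted in semitones while a shorter vocal tract scales formant frequencies upward; the factors in the sketch below are generic textbook approximations, not the study's stimulus parameters.

# Rough illustration of F0 and VTL cue manipulations (generic approximations,
# not the study's stimulus parameters).
import numpy as np

f0_male = 120.0                                      # Hz, assumed reference
formants_male = np.array([500.0, 1500.0, 2500.0])    # Hz, schematic neutral vowel

f0_shift_semitones = 12.0   # one octave up, toward a female-typical F0 (assumed)
vtl_ratio = 0.85            # assumed female/male vocal tract length ratio

f0_shifted = f0_male * 2 ** (f0_shift_semitones / 12)
formants_shifted = formants_male / vtl_ratio         # formants scale inversely with VTL

print(f"F0: {f0_male:.0f} -> {f0_shifted:.0f} Hz")
print("formants:", formants_male, "->", formants_shifted.round())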

from #Audiology via ola Kala on Inoreader http://ift.tt/291zocb
via IFTTT

Embedded Instruction Improves Vocabulary Learning During Automated Storybook Reading Among High-Risk Preschoolers

Purpose
We investigated a small-group intervention designed to teach vocabulary and comprehension skills to preschoolers who were at risk for language and reading disabilities. These language skills are important and reliable predictors of later academic achievement.
Method
Preschoolers heard prerecorded stories 3 times per week over the course of a school year. A cluster randomized design was used to evaluate the effects of hearing storybooks with and without embedded vocabulary and comprehension lessons. A total of 32 classrooms were randomly assigned to experimental and comparison conditions. Approximately 6 children per classroom demonstrating low vocabulary knowledge, totaling 195 children, were enrolled.
Results
Preschoolers in the comparison condition did not learn novel, challenging vocabulary words to which they were exposed in story contexts, whereas preschoolers receiving embedded lessons demonstrated significant learning gains, although vocabulary learning diminished over the course of the school year. Modest gains in comprehension skills did not differ between the two groups.
Conclusion
The Story Friends curriculum appears to be highly feasible for delivery in early childhood educational settings and effective at teaching challenging vocabulary to high-risk preschoolers.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJ8Yl
via IFTTT

On Older Listeners' Ability to Perceive Dynamic Pitch

Purpose
Natural speech comes with variation in pitch, which serves as an important cue for speech recognition. The present study investigated older listeners' dynamic pitch perception with a focus on interindividual variability. In particular, we asked whether some older listeners' difficulty perceiving dynamic pitch stems from a higher susceptibility to interference from formant changes.
Method
A total of 22 older listeners and 21 younger controls with at least near-typical hearing were tested on dynamic pitch identification and discrimination tasks using synthetic monophthong and diphthong vowels.
Results
The older listeners' ability to detect changes in pitch varied substantially, even when musical and linguistic experiences were controlled. The influence of formant patterns on dynamic pitch perception was evident in both groups of listeners. Overall, strong pitch contours (i.e., more dynamic) were perceived better than weak pitch contours (i.e., more monotonic), particularly with rising pitch patterns.
Conclusions
The findings are in accordance with the literature demonstrating some older individuals' difficulty perceiving dynamic pitch cues in speech. Moreover, they suggest that this problem may be prominent when the dynamic pitch is carried by natural speech and when the pitch contour is not strong.

from #Audiology via ola Kala on Inoreader http://ift.tt/291zvVf
via IFTTT

Continuous Performance Tasks: Not Just About Sustaining Attention

Purpose
Continuous performance tasks (CPTs) are used to measure individual differences in sustained attention. Many different stimuli have been used as response targets without consideration of their impact on task performance. Here, we compared CPT performance in typically developing adults and children to assess the role of stimulus processing on error rates and reaction times.
Method
Participants completed a CPT that was based on response to infrequent targets, while monitoring and withholding responses to regular nontargets. Performance on 3 stimulus conditions was compared: visual letters (X and O), their auditory analogs, and auditory pure tones.
Results
Adults showed no difference in error propensity across the 3 conditions but had slower reaction times for auditory stimuli. Children had slower overall reaction times. They responded most quickly to the visual target and most slowly to the tone target. They also made more errors in the tone condition than in either the visual or the auditory spoken CPT conditions.
Conclusions
The results suggest error propensity and reaction time variations on CPTs cannot solely be interpreted as evidence of inattention. They also reflect stimulus-specific influences that must be considered when testing hypotheses about modality-specific deficits in sustained attention in populations with different developmental disorders.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QJ8rk
via IFTTT

Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss

Purpose
Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers.
Method
Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility.
Results
Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise.
Conclusions
We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

from #Audiology via ola Kala on Inoreader http://ift.tt/28U86Z1
via IFTTT

Emotional Diathesis, Emotional Stress, and Childhood Stuttering

Purpose
The purpose of this study was to determine (a) whether emotional reactivity and emotional stress of children who stutter (CWS) are associated with their stuttering frequency, (b) when the relationship between emotional reactivity and stuttering frequency is more likely to exist, and (c) how these associations are mediated by a 3rd variable (e.g., sympathetic arousal).
Method
Participants were 47 young CWS (M age = 50.69 months, SD = 10.34). Measurement of participants' emotional reactivity was based on parental report, and emotional stress was engendered by viewing baseline, positive, and negative emotion-inducing video clips, with stuttered disfluencies and sympathetic arousal (indexed by tonic skin conductance level) measured during a narrative after viewing each of the various video clips.
Results
CWS's positive emotional reactivity was positively associated with percentage of their stuttered disfluencies regardless of emotional stress condition. CWS's negative emotional reactivity was more positively correlated with percentage of stuttered disfluencies during a narrative after a positive, compared with baseline, emotional stress condition. CWS's sympathetic arousal did not appear to mediate the effect of emotional reactivity, emotional stress condition, and their interaction on percentage of stuttered disfluencies, at least during this experimental narrative task following emotion-inducing video clips.
Conclusions
Results were taken to suggest an association between young CWS's positive emotional reactivity and stuttering, with negative reactivity seemingly more associated with these children's stuttering during positive emotional stress (a stress condition possibly associated with lesser degrees of emotion regulation). Such findings seem to support the notion that emotional processes warrant inclusion in any truly comprehensive account of childhood stuttering.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/28QFXxn
via IFTTT

Wideband acoustic activation and detection of droplet vaporization events using a capacitive micromachined ultrasonic transducer


An ongoing challenge exists in understanding and optimizing the acoustic droplet vaporization (ADV) process to enhance contrast agent effectiveness for biomedical applications. Acoustic signatures from vaporization events can be identified and differentiated from microbubble or tissue signals based on their frequency content. The present study exploited the wide bandwidth of a 128-element capacitive micromachined ultrasonic transducer (CMUT) array for activation (8 MHz) and real-time imaging (1 MHz) of ADV events from droplets circulating in a tube. Compared to a commercial piezoelectric probe, the CMUT array provides a substantial increase of the contrast-to-noise ratio.
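One common way to quantify the reported contrast gain is a contrast-to-noise ratio between an event region and the background; the sketch below uses a hypothetical image and a generic CNR definition, not the study's data or processing.

# Contrast-to-noise ratio (CNR) sketch on a hypothetical envelope image.
# Generic definition; CNR conventions vary across studies.
import numpy as np

rng = np.random.default_rng(4)
background = rng.rayleigh(1.0, size=(64, 64))   # speckle-like background region
roi = rng.rayleigh(4.0, size=(16, 16))          # region containing vaporization events

cnr = (roi.mean() - background.mean()) / background.std()
print(f"CNR = {cnr:.1f}  ({20 * np.log10(cnr):.1f} dB)")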



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28Z29pS
via IFTTT

The mechanisms of subharmonic tone generation in a synthetic larynx model


The sound spectra obtained in a synthetic larynx exhibited subharmonic tones that are characteristic of diplophonia. Although the generation of subharmonics is commonly associated with asymmetrically oscillating vocal folds, the synthetic elastic vocal folds showed symmetrical oscillations. The amplitudes of the subharmonics decreased with an increasing lateral diameter of the supraglottal channel, which indicates a strong dependence on the supraglottal boundary conditions. Investigations of the supraglottal flow field revealed small cycle-to-cycle variations of the static pressure in the region of the pulsatile glottal jet as the origin of the first subharmonic tone, which is located at half the fundamental frequency of the vocal fold oscillation. A principal component analysis of the supraglottal flow field with the fully developed glottal jet revealed a large recirculation area in the second spatial eigenvector, which deflected the glottal jet slightly in a direction perpendicular to the jet axis. The rotation direction of the recirculation area alternated between clockwise and counterclockwise across oscillation cycles. As both directions were uniformly distributed across all acquired oscillation cycles, a cycle-wise change can be assumed. It is concluded that acoustic subharmonics are generated by small fluctuations of the glottal jet location, favored by small lateral diameters of the supraglottal channel.
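The first subharmonic at half the oscillation frequency can be illustrated with a simple spectral check on a synthetic signal (not the synthetic-larynx recordings):

# Synthetic illustration: a tone at f0 plus a weak component at f0/2 (subharmonic).
import numpy as np

fs, dur = 44100, 1.0
f0 = 150.0                                    # assumed oscillation frequency, Hz
t = np.arange(int(fs * dur)) / fs
signal = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * (f0 / 2) * t)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

for target in (f0 / 2, f0):
    k = np.argmin(np.abs(freqs - target))     # nearest frequency bin
    print(f"{target:5.1f} Hz: {20 * np.log10(spectrum[k]):.1f} dB (relative)")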



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28RqJ0e
via IFTTT

On the method of Hunt’s parameter calibration

Publication date: Available online 23 June 2016
Source:Hearing Research
Author(s): Noori Kim, Jont Allen
This note comments on the observations of Bernier et al. (2016) regarding errors in Appendix A of Kim and Allen (2013). We acknowledge that the equations in the Appendix are in error, but wish to point out that these equations were not actually used for our analysis. We appreciate their effort in pointing out the errors, and offering corrected equations.



from #Audiology via ola Kala on Inoreader http://ift.tt/28SA7jO
via IFTTT

Effectiveness of nonporous windscreens for infrasonic measurements


This paper deals with nonporous windscreens used for reducing noise in infrasonic measurements. A model of sound transmission using a modal approach is derived. The system is a square plate coupled with a cavity. The model agrees with finite element simulations and measurements performed on two windscreens: a cubic windscreen using a material recommended by Shams, Zuckerwar, and Sealey [J. Acoust. Soc. Am. 118, 1335–1340 (2005)] and an optimized flat windscreen made of aluminum. Only the latter was found to couple acoustical waves below 10 Hz without any attenuation. Moreover, wind noise reduction measurements show that nonporous windscreens perform similarly to a pipe array by averaging the pressure fluctuations. These results question the assumptions of Shams et al. and Zuckerwar [J. Acoust. Soc. Am. 127, 3327–3334 (2010)] about compact nonporous windscreen design and effectiveness.
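For orientation, the modal approach for a flat windscreen starts from the plate's natural frequencies; the sketch below evaluates the classical simply supported square-plate formula for a generic aluminum panel with arbitrarily chosen dimensions, and does not reproduce the paper's coupled plate-cavity model.

# Simply supported square aluminum plate: classical modal frequencies.
# Generic illustration; dimensions are arbitrary, not the paper's windscreen.
import numpy as np

E, nu, rho = 70e9, 0.33, 2700.0   # aluminum: Young's modulus (Pa), Poisson ratio, density (kg/m^3)
a = b = 0.5                       # assumed plate side lengths (m)
h = 2e-3                          # assumed thickness (m)

D = E * h**3 / (12 * (1 - nu**2))   # bending stiffness (N*m)
for m, n in [(1, 1), (1, 2), (2, 2)]:
    f_mn = (np.pi / 2) * ((m / a) ** 2 + (n / b) ** 2) * np.sqrt(D / (rho * h))
    print(f"mode ({m},{n}): {f_mn:.1f} Hz")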



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28Z2clF
via IFTTT

Near-field/far-field array manifold of an acoustic vector-sensor near a reflecting boundary


The acoustic vector-sensor (a.k.a. the vector hydrophone) is a practical and versatile sound-measurement device, with applications in rooms, in open air, or underwater. It consists of three identical uni-axial velocity-sensors in orthogonal orientations, plus a pressure-sensor, all in spatial collocation. Its far-field array manifold [Nehorai and Paldi (1994). IEEE Trans. Signal Process. 42, 2481–2491; Hawkes and Nehorai (2000). IEEE Trans. Signal Process. 48, 2981–2993] was introduced into the technical field of signal processing about 2 decades ago, and many direction-finding algorithms have since been developed for this acoustic vector-sensor. The above array manifold was subsequently generalized for outside the far field in Wu, Wong, and Lau [(2010). IEEE Trans. Signal Process. 58, 3946–3951], but only if no reflection boundary lies near the acoustic vector-sensor. This paper derives and presents, in terms of signal-processing mathematics, the near-boundary array manifold for the general case of an emitter in the geometric near field, the far field, or anywhere in between. Also derived here is the corresponding Cramér-Rao bound for azimuth-elevation-distance localization of an incident emitter, with the reflected wave shown to play a critical role on account of its constructive or destructive summation with the line-of-sight wave. The implications on source localization are explored, especially with respect to measurement-model mismatch in maximum-likelihood direction finding and with regard to the spatial resolution between coexisting emitters.
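For reference, the standard far-field array manifold of a single acoustic vector-sensor stacks the unit direction-of-arrival vector on top of a 1 for the pressure channel (sign conventions vary across papers); the near-boundary manifold derived in the paper is not reproduced here.

# Far-field array manifold of a collocated acoustic vector-sensor
# (three orthogonal velocity channels plus a pressure channel).
import numpy as np

def vector_sensor_manifold(azimuth, elevation):
    """4-element far-field manifold [u_x, u_y, u_z, 1] for a unit-power incident wave."""
    u = np.array([
        np.cos(elevation) * np.cos(azimuth),   # x component of the direction of arrival
        np.cos(elevation) * np.sin(azimuth),   # y component
        np.sin(elevation),                     # z component
    ])
    return np.concatenate([u, [1.0]])

print(vector_sensor_manifold(np.deg2rad(30), np.deg2rad(45)))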



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28RqORq
via IFTTT

A model of ultrasound-enhanced diffusion in a biofilm


A stochastic model is presented for nanoparticle transport in a biofilm to explain how the combination of acoustic oscillations and intermittent retention due to interaction with the pore walls of the biofilm leads to diffusion enhancement. An expression for the effective diffusion coefficient was derived that varies with the square of the oscillation velocity amplitude. This expression was validated by comparison of an analytical diffusion solution to the stochastic model prediction. The stochastic model was applied to an example problem associated with liposome penetration into a hydrogel, and it was found to yield solutions in which liposome concentration varied exponentially with distance into the biofilm.
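The two scalings stated above (effective diffusion growing with the square of the oscillation velocity amplitude, and exponentially decaying concentration with depth) can be illustrated numerically; every constant in the sketch below is an arbitrary placeholder, and the penetration length is written as sqrt(D_eff * tau) under an assumed timescale tau rather than the paper's stochastic model.

# Illustrative scaling only: D_eff grows with the square of the oscillation
# velocity amplitude, and relative concentration decays exponentially with depth.
# All constants are arbitrary placeholders, not the paper's parameters.
import numpy as np

D0 = 1e-12      # baseline diffusion coefficient (m^2/s), assumed
alpha = 5e-9    # assumed enhancement constant (s)
v = np.array([0.0, 0.01, 0.02, 0.04])   # oscillation velocity amplitudes (m/s)

D_eff = D0 + alpha * v**2

x = 50e-6       # depth into the biofilm (m)
tau = 60.0      # assumed exposure timescale (s)
penetration_length = np.sqrt(D_eff * tau)
relative_conc = np.exp(-x / penetration_length)
for vi, Di, ci in zip(v, D_eff, relative_conc):
    print(f"v = {vi:.2f} m/s: D_eff = {Di:.2e} m^2/s, C(x)/C0 = {ci:.3f}")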



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28YSsaI
via IFTTT

Underwater sound of rigid-hulled inflatable boats

Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10–48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10–56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
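The monopole source levels quoted above come from propagation modeling; a crude back-of-envelope version that assumes simple spherical spreading is sketched below with hypothetical numbers (the study used full propagation models, not this approximation).

# Crude source-level estimate: received level plus a spherical-spreading
# propagation loss. Values are hypothetical; the study used full propagation models.
import numpy as np

received_level_db = 120.0   # hypothetical broadband received level, dB re 1 uPa
range_m = 400.0             # hypothetical source-receiver range, m

propagation_loss_db = 20 * np.log10(range_m / 1.0)   # spherical spreading re 1 m
source_level_db = received_level_db + propagation_loss_db
print(f"estimated SL = {source_level_db:.1f} dB re 1 uPa @ 1 m")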



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28Pq7pd
via IFTTT

English-speaking preschoolers can use phrasal prosody for syntactic parsing


This study tested American preschoolers' ability to use phrasal prosody to constrain their syntactic analysis of locally ambiguous sentences containing noun/verb homophones (e.g., [The baby flies] [hide in the shadows] vs [The baby] [flies his kite], brackets indicate prosodic boundaries). The words following the homophone were masked, such that prosodic cues were the only disambiguating information. In an oral completion task, 4- to 5-year-olds successfully exploited the sentence's prosodic structure to assign the appropriate syntactic category to the target word, mirroring previous results in French (but challenging previous English-language results) and providing cross-linguistic evidence for the role of phrasal prosody in children's syntactic analysis.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28YS3Fr
via IFTTT

Modeling listener perception of speaker similarity in dysarthria


The current investigation contributes to a perceptual similarity-based approach to dysarthria characterization by utilizing an innovative statistical approach, multinomial logistic regression with sparsity constraints, to identify acoustic features underlying each listener's impressions of speaker similarity. The data-driven approach also permitted an examination of the effect of clinical experience on listeners' impressions of similarity. Listeners, irrespective of level of clinical experience, were found to rely on similar acoustic features during the perceptual sorting task, known as free classification. Overall, the results support the continued advancement of a similarity-based approach to characterizing the communication disorders associated with dysarthria.
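The statistical approach named above corresponds to an L1-penalized multinomial logistic regression; a generic scikit-learn sketch on simulated data follows (not the study's listener data or acoustic feature set).

# Generic L1-penalized multinomial logistic regression sketch (simulated data;
# not the study's listener data or acoustic feature set).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_tokens, n_features, n_groups = 200, 12, 4
X = rng.normal(size=(n_tokens, n_features))           # synthetic acoustic features
scores = X[:, :3] @ rng.normal(size=(3, n_groups))    # only the first 3 features matter
y = scores.argmax(axis=1)                             # synthetic listener groupings

model = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
model.fit(X, y)

# With the L1 penalty, uninformative features should receive zero weight.
selected = np.flatnonzero(np.any(model.coef_ != 0, axis=0))
print("retained features:", selected)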



from #Audiology via xlomafota13 on Inoreader http://ift.tt/28Pq3Wo
via IFTTT

Cochlear implantation: Optimizing outcomes through evidence-based clinical decisions.

Cochlear implantation: Optimizing outcomes through evidence-based clinical decisions.

Int J Audiol. 2016;55 Suppl 2:S1-2

Authors: Dowell R, Galvin K, Cowan R

PMID: 27329573 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/28PLIeR
via IFTTT

Audiology patient fall statistics and risk factors compared to non-audiology patients.

Audiology patient fall statistics and risk factors compared to non-audiology patients.

Int J Audiol. 2016 Jun 22;:1-7

Authors: Criter RE, Honaker JA

Abstract
OBJECTIVE: To compare fall statistics (e.g. incidence, prevalence), fall risks, and characteristics of patients who seek hearing healthcare from an audiologist with those of individuals who have not sought such services.
DESIGN: Case-control study.
STUDY SAMPLE: Two groups of community-dwelling older adult patients: 25 audiology patients aged 60 years or older (M age: 69.2 years, SD: 4.5, range: 61-77) and a control group (gender- and age-matched ±2 years) of 25 non-audiology patients (M age: 69.6, SD: 4.7, range: 60-77).
RESULTS: Annual incidence of falls (most recent 12 months) was higher in audiology patients (68.0%) than non-audiology patients (28.0%; p = .005). Audiology patients reported a higher incidence of multiple recent falls (p = .025) and more chronic health conditions (p = .028) than non-audiology patients.
CONCLUSIONS: Significantly more audiology patients fall on an annual basis than non-audiology patients, suggesting that falls are a pervasive issue in general hearing clinics. Further action on the part of healthcare professionals providing audiologic services may be necessary to identify individuals at risk for falling.
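The incidence comparison can be reconstructed in form from the reported group sizes and percentages (68% and 28% of 25 patients per group); the sketch below uses Fisher's exact test, so the p value need not match the published one exactly.

# 2x2 comparison of annual fall incidence reconstructed from the reported
# percentages (68% and 28% of 25 patients per group). The published p value
# may come from a different test, so results need not match exactly.
from scipy.stats import fisher_exact

audiology = [17, 25 - 17]   # fallers, non-fallers (68% of 25)
control = [7, 25 - 7]       # fallers, non-fallers (28% of 25)

odds_ratio, p = fisher_exact([audiology, control])
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p:.4f}")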

PMID: 27329486 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/28S9jQz
via IFTTT