Saturday, 27 August 2016

Lexical tone recognition in noise in normal-hearing children and prelingually deafened children with cochlear implants.

Int J Audiol. 2016 Aug 26:1-8

Authors: Mao Y, Xu L

Abstract
OBJECTIVE: The purpose of the present study was to investigate Mandarin tone recognition in background noise in children with cochlear implants (CIs), and to examine the potential factors contributing to their performance.
DESIGN: Tone recognition was tested using a two-alternative forced-choice paradigm in various signal-to-noise ratio (SNR) conditions (i.e. quiet, +12, +6, 0, and -6 dB). Linear correlation analysis was performed to examine possible relationships between the tone-recognition performance of the CI children and the demographic factors.
STUDY SAMPLE: Sixty-six prelingually deafened children with CIs and 52 normal-hearing (NH) children as controls participated in the study.
RESULTS: Children with CIs showed overall poorer tone-recognition performance and were more susceptible to noise than their NH peers. Confusions between Mandarin tone 2 and tone 3 were most prominent in both CI and NH children except in the poorest SNR conditions. Age at implantation was significantly correlated with the tone-recognition performance of the CI children in noise.
CONCLUSIONS: There is a marked deficit in tone recognition in prelingually deafened children with CIs, particularly in noise listening conditions. While factors that contribute to the large individual differences are still elusive, early implantation could be beneficial to tone development in pediatric CI users.
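The SNR conditions in the design (quiet, +12, +6, 0, and -6 dB) amount to scaling a noise masker relative to the target speech before mixing. A minimal sketch of that scaling, assuming digitized signals as arrays; the function name and signals are illustrative, not from the study:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals `snr_db`,
    then return the speech-plus-noise mixture."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Noise power required for the target SNR.
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled_noise
```

A mixture for the study's hardest condition would then be `mix_at_snr(speech, noise, -6.0)`.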

PMID: 27564095 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bHKTZr
via IFTTT

Corrigendum.

Int J Audiol. 2016 Aug 26:1

Authors:

PMID: 27561903 [PubMed - as supplied by publisher]




Semantic Processing in Deaf and Hard-of-Hearing Children: Large N400 Mismatch Effects in Brain Responses, Despite Poor Semantic Ability.

Front Psychol. 2016;7:1146

Authors: Kallioinen P, Olofsson J, Nakeva von Mentzer C, Lindgren M, Ors M, Sahlén BS, Lyxell B, Engström E, Uhlén I

Abstract
Difficulties in auditory and phonological processing affect semantic processing in speech comprehension for deaf and hard-of-hearing (DHH) children. However, little is known about brain responses related to semantic processing in this group. We investigated event-related potentials (ERPs) in DHH children with cochlear implants (CIs) and/or hearing aids (HAs), and in normally hearing controls (NH). We used a semantic priming task with spoken word primes followed by picture targets. In both DHH children and controls, cortical response differences between matching and mismatching targets revealed a typical N400 effect associated with semantic processing. Children with CIs had the largest mismatch response despite poor semantic abilities overall; they also had the largest ERP differentiation between mismatch types, with small effects in within-category mismatch trials (target from the same category as the prime) and large effects in between-category mismatch trials (target from a different category than the prime), compared with matching trials. Children with NH and HAs had similar responses to both mismatch types. While the large and differentiated ERP responses in the CI group were unexpected and should be interpreted with caution, the results could reflect less precision in semantic processing among children with CIs, or a stronger reliance on predictive processing.

PMID: 27559320 [PubMed]






JAAA CEU Program.

J Am Acad Audiol. 2016 Sep;27(8):684-685

Authors:

PMID: 27564447 [PubMed - as supplied by publisher]




Response to Dr. Vermiglio.

J Am Acad Audiol. 2016 Sep;27(8):683

Authors: Jerger J

PMID: 27564446 [PubMed - as supplied by publisher]




Validity and Reliability of the Hearing Handicap Inventory for Elderly: Version Adapted for Use on the Portuguese Population.

J Am Acad Audiol. 2016 Sep;27(8):677-682

Authors: de Paiva SM, Simões J, Paiva A, Newman C, Castro E Sousa F, Bébéar JP

Abstract
BACKGROUND: The Hearing Handicap Inventory for the Elderly (HHIE) questionnaire enables measurement of the self-perceived psychosocial handicap of hearing impairment in the elderly as a supplement to pure-tone audiometry. This screening instrument is widely used and has undergone adaptation and validation in many languages; all of these versions have retained the validity and reliability of the original.
PURPOSE: To validate the HHIE questionnaire, translated into European Portuguese, in the Portuguese population.
RESEARCH DESIGN: This was a descriptive, correlational, qualitative study. The authors performed the translation from English into Portuguese, the linguistic adaptation, and the back-translation.
STUDY SAMPLE: Two hundred and sixty patients from the Ear, Nose, and Throat (ENT) Department of Coimbra University Hospitals were divided into a case group (83 individuals) and a control group (177 individuals).
INTERVENTION: All of the 260 patients completed the 25 items in the questionnaire and the answers were reviewed for completeness.
DATA COLLECTION AND ANALYSIS: The patients volunteered to answer the 25-item HHIE during an ENT appointment. Correlations between each individual item and the total score of the HHIE were tested, and demographic and clinical variables were correlated with the total score as well. The instrument's reproducibility was assessed using the internal consistency model (Cronbach's alpha).
RESULTS: The questions were successfully understood by the participants. There was a significant difference in the HHIE-10 and HHIE-25 total scores between the two groups (p < 0.001). Positive correlations were seen between the global question and HHIE-10 and HHIE-25. In the regression study, a relationship was observed between the pure-tone average and the HHIE-10 (p < 0.001). Reliability of the instrument was demonstrated by a Cronbach's alpha of 0.79.
CONCLUSIONS: The European Portuguese translation of the HHIE maintained the validity of the original version and is useful for assessing the psychosocial handicap of hearing impairment in the elderly.
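The internal-consistency statistic used here, Cronbach's alpha, is defined as k/(k-1) * (1 - sum of item variances / variance of the total score), for k items. A minimal sketch of the computation; the input layout (rows = respondents, columns = items) is an assumption, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix,
    using sample variances (ddof=1)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items yield alpha = 1; values near 0.79, as reported above, indicate acceptable internal consistency.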

PMID: 27564445 [PubMed - as supplied by publisher]




Motivational Interviewing as an Adjunct to Hearing Rehabilitation for Patients with Tinnitus: A Randomized Controlled Pilot Trial.

J Am Acad Audiol. 2016 Sep;27(8):669-676

Authors: Zarenoe R, Söderlund LL, Andersson G, Ledin T

Abstract
PURPOSE: To test the effects of a brief motivational interviewing (MI) program as an adjunct to hearing aid rehabilitation for patients with tinnitus and sensorineural hearing loss.
RESEARCH DESIGN: This was a pilot randomized controlled trial.
STUDY SAMPLE: The sample consisted of 50 patients aged between 40 and 82 yr with both tinnitus and sensorineural hearing loss and a pure-tone average (0.5, 1, 2, and 4 kHz) < 70 dB HL. All patients were first-time hearing aid users.
INTERVENTION: A brief MI program was used during hearing aid fitting in 25 patients, whereas the remainder received standard practice (SP), with conventional hearing rehabilitation.
DATA COLLECTION AND ANALYSIS: A total of 46 patients (N = 23 + 23) with tinnitus were included for further analysis. The Tinnitus Handicap Inventory (THI) and the International Outcome Inventory for Hearing Aids (IOI-HA) were administered before and after rehabilitation. THI was used to investigate changes in tinnitus annoyance, and the IOI-HA was used to determine the effect of hearing aid treatment.
RESULTS: Self-reported tinnitus disability (THI) decreased significantly in the MI group (p < 0.001) and in the SP group (p < 0.006). However, there was greater improvement in the MI group (p < 0.013). Furthermore, the findings showed a significant improvement in patients' satisfaction concerning the hearing aids (IOI-HA, within both groups; MI group, p < 0.038; and SP group, p < 0.026), with no difference between the groups (p < 0.99).
CONCLUSION: Tinnitus handicap scores decrease to a greater extent following brief MI than following SP. Future research on the value of incorporating MI into audiological rehabilitation using randomized controlled designs is required.

PMID: 27564444 [PubMed - as supplied by publisher]




Manganese and Lipoflavonoid Plus® to Treat Tinnitus: A Randomized Controlled Trial.

J Am Acad Audiol. 2016 Sep;27(8):661-668

Authors: Rojas-Roncancio E, Tyler R, Jun HJ, Wang TC, Ji H, Coelho C, Witt S, Hansen MR, Gantz BJ

Abstract
BACKGROUND: Some tinnitus sufferers report that manganese has been helpful for their tinnitus.
PURPOSE: We tested this in a controlled experiment in which participants committed to taking manganese and Lipoflavonoid Plus® to treat their tinnitus.
RESEARCH DESIGN: Randomized controlled trial.
STUDY SAMPLE: Forty participants were randomized to receive both manganese and Lipoflavonoid Plus® for 6 months, or Lipoflavonoid Plus® only (as the control).
DATA COLLECTION AND ANALYSIS: Pre- and postmeasures were obtained with the Tinnitus Handicap Questionnaire, Tinnitus Primary Functions Questionnaire, and tinnitus loudness and annoyance ratings. An audiologist performed the audiogram, the tinnitus loudness match, and minimal masking level.
RESULTS: Twelve participants dropped out of the study because of side effects or were lost to follow-up. In the manganese group, 1 participant (out of 12) showed a decrease on the questionnaires, and another showed a decrease in the loudness and annoyance ratings. No participants in the control group (total 16) showed a decrease in questionnaire ratings. Two participants in the control group reported a loudness decrement and one reported an annoyance decrement.
CONCLUSIONS: We were not able to conclude that either manganese or Lipoflavonoid Plus® is an effective treatment for tinnitus.

PMID: 27564443 [PubMed - as supplied by publisher]




A Sequential Sentence Paradigm Using Revised PRESTO Sentence Lists.

J Am Acad Audiol. 2016 Sep;27(8):647-660

Authors: Plotkowski AR, Alexander JM

Abstract
BACKGROUND: Listening in challenging situations requires explicit cognitive resources to decode and process speech. Traditional speech recognition tests are limited in documenting this cognitive effort, which may differ greatly between individuals or listening conditions despite similar scores. A sequential sentence paradigm was designed to be more sensitive to individual differences in demands on verbal processing during speech recognition.
PURPOSE: The purpose of this study was to establish the feasibility, validity, and equivalency of test materials in the sequential sentence paradigm as well as to evaluate the effects of masker type, signal-to-noise ratio (SNR), and working memory (WM) capacity on performance in the task.
RESEARCH DESIGN: Listeners heard a pair of sentences and repeated aloud the second sentence (immediate recall) and then wrote down the first sentence (delayed recall). Sentence lists were from the Perceptually Robust English Sentence Test Open-set (PRESTO) test. In experiment I, listeners completed a traditional speech recognition task. In experiment II, listeners completed the sequential sentence task at one SNR. In experiment III, the masker type (steady noise versus multitalker babble) and SNR were varied to demonstrate the effects of WM as the speech material increased in difficulty.
STUDY SAMPLE: Young, normal-hearing adults (total n = 53) from the Purdue University community completed one of the three experiments.
DATA COLLECTION AND ANALYSIS: Keyword scoring of the PRESTO lists was completed for both the immediate- and delayed-recall sentences. The Verbal Letter Monitoring task, a test of WM, was used to separate listeners into a low-WM or high-WM group.
RESULTS: Experiment I indicated that mean recognition on the single-sentence task was highly variable between the original PRESTO lists. Modest rearrangement of the sentences yielded 18 statistically equivalent lists (mean recognition = 65.0%, range = 64.4-65.7%), which were used in the sequential sentence task in experiment II. In the new test paradigm, recognition of the immediate-recall sentences was not statistically different from the single-sentence task, indicating that there were no cognitive load effects from the delayed-recall sentences. Finally, experiment III indicated that multitalker babble was equally detrimental compared to steady-state noise for immediate recall of sentences for both low- and high-WM groups. On the other hand, delayed recall of sentences in multitalker babble was disproportionately more difficult for the low-WM group compared with the high-WM group.
CONCLUSIONS: The sequential sentence paradigm is a feasible test format with mostly equivalent lists. Future studies using this paradigm may need to consider individual differences in WM to see the full range of effects across different conditions. Possible applications include testing the efficacy of various signal-processing techniques in clinical populations.

PMID: 27564442 [PubMed - as supplied by publisher]




Directional Processing and Noise Reduction in Hearing Aids: Individual and Situational Influences on Preferred Setting.

J Am Acad Audiol. 2016 Sep;27(8):628-646

Authors: Neher T, Wagener KC, Fischer RL

Abstract
BACKGROUND: A better understanding of individual differences in hearing aid (HA) outcome is a prerequisite for more personalized HA fittings. Currently, knowledge of how different user factors relate to response to directional processing (DIR) and noise reduction (NR) is sparse.
PURPOSE: To extend a recent study linking preference for DIR and NR to pure-tone average hearing thresholds (PTA) and cognitive factors by investigating if (1) equivalent links exist for different types of DIR and NR, (2) self-reported noise sensitivity and personality can account for additional variability in preferred DIR and NR settings, and (3) spatial target speech configuration interacts with individual DIR preference.
RESEARCH DESIGN: Using a correlational study design, overall preference for different combinations of DIR and NR programmed into a commercial HA was assessed in a complex speech-in-noise situation and related to PTA, cognitive function, and different personality traits.
STUDY SAMPLE: Sixty experienced HA users aged 60-82 yr with controlled variation in PTA and working memory capacity took part in this study. All of them had participated in the earlier study, as part of which they were tested on a measure of "executive control" tapping into cognitive functions such as working memory, mental flexibility, and selective attention.
DATA COLLECTION AND ANALYSIS: Six HA settings based on unilateral (within-device) or bilateral (across-device) DIR combined with inactive, moderate, or strong single-microphone NR were programmed into a pair of behind-the-ear HAs together with individually prescribed amplification. Overall preference was assessed using a free-field simulation of a busy cafeteria situation with either a single frontal talker or two talkers at ±30° azimuth as the target speech. In addition, two questionnaires targeting noise sensitivity and the "Big Five" personality traits were administered. Data were analyzed using multiple regression analyses and repeated-measures analyses of variance with a focus on potential interactions between the HA settings and user factors.
RESULTS: Consistent with the earlier study, preferred HA setting was related to PTA and executive control. However, effects were weaker this time. Noise sensitivity and personality did not interact with HA settings. As expected, spatial target speech configuration influenced preference, with bilateral and unilateral DIR "winning" in the single- and two-talker scenario, respectively. In general, participants with higher PTA tended to more strongly prefer bilateral DIR than participants with lower PTA.
CONCLUSIONS: Although the current study lends some support to the view that PTA and cognitive factors affect preferred DIR and NR setting, it also indicates that these effects can vary across noise management technologies. To facilitate more personalized HA fittings, future research should investigate the source of this variability.

PMID: 27564441 [PubMed - as supplied by publisher]




Do Modern Hearing Aids Meet ANSI Standards?

J Am Acad Audiol. 2016 Sep;27(8):619-627

Authors: Holder JT, Picou EM, Gruenwald JM, Ricketts TA

Abstract
BACKGROUND: The American National Standards Institute (ANSI) publishes the standards that govern quality control for hearing aids. If hearing aids do not meet specifications, there are potential negative implications for hearing aid users, professionals, and the industry. Recent literature has not examined what proportion of new hearing aids comply with the ANSI quality control specifications when they arrive in the clinic, before dispensing.
PURPOSE: The aims of this study were to determine the percentage of new hearing aids compliant with the relevant ANSI standard and to report trends in electroacoustic analysis data.
RESEARCH DESIGN: New hearing aids were evaluated for quality control via the ANSI S3.22-2009 standard. In addition, quality control of directional processing was also assessed.
STUDY SAMPLE: Seventy-three behind-the-ear hearing aids from four major manufacturers, purchased for clinical patients, were evaluated before dispensing.
DATA COLLECTION AND ANALYSIS: The Audioscan Verifit (version 3.1) hearing instrument fitting system was used to complete electroacoustic analysis and directional processing evaluation of the hearing aids. A Frye Fonix 8000 test box system was also used to cross-check equivalent input noise (EIN) measurements. These measurements were then analyzed for trends across brands and specifications.
RESULTS: All of the hearing aids evaluated were found to be out of specification for at least one measure. EIN and attack and release times were the measures most frequently out of specification. EIN was found to be affected by test box isolation for two of the four brands tested. Systematic discrepancies accounted for ∼93% of the noncompliance issues, while unsystematic quality control issues accounted for the remaining 7%.
CONCLUSIONS: The high number of systematic discrepancies between the data collected and the specifications published by the manufacturers suggests there are clear issues related to the specific protocols used for quality control testing. These issues present a significant barrier for hearing aid dispensers when attempting to accurately determine if a hearing aid is functioning appropriately. The significant number of unsystematic discrepancies supports the continued importance of quality control measures of new and repaired hearing aids to ensure that the device is functioning properly before it is dispensed and to avoid future negative implications of fitting a faulty device.

PMID: 27564440 [PubMed - as supplied by publisher]




Some Interesting Facts about the Journal of the American Academy of Audiology.

J Am Acad Audiol. 2016 Sep;27(8):618

Authors: McCaslin DL

PMID: 27564439 [PubMed - as supplied by publisher]




Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants

Purpose
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs.
Methods
Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli.
Results
Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available.
Conclusions
The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.


Emotional Diathesis, Emotional Stress, and Childhood Stuttering

Purpose
The purpose of this study was to determine (a) whether emotional reactivity and emotional stress of children who stutter (CWS) are associated with their stuttering frequency, (b) when the relationship between emotional reactivity and stuttering frequency is more likely to exist, and (c) how these associations are mediated by a 3rd variable (e.g., sympathetic arousal).
Method
Participants were 47 young CWS (M age = 50.69 months, SD = 10.34). Measurement of participants' emotional reactivity was based on parental report, and emotional stress was engendered by viewing baseline, positive, and negative emotion-inducing video clips, with stuttered disfluencies and sympathetic arousal (indexed by tonic skin conductance level) measured during a narrative after viewing each of the various video clips.
Results
CWS's positive emotional reactivity was positively associated with percentage of their stuttered disfluencies regardless of emotional stress condition. CWS's negative emotional reactivity was more positively correlated with percentage of stuttered disfluencies during a narrative after a positive, compared with baseline, emotional stress condition. CWS's sympathetic arousal did not appear to mediate the effect of emotional reactivity, emotional stress condition, and their interaction on percentage of stuttered disfluencies, at least during this experimental narrative task following emotion-inducing video clips.
Conclusions
Results were taken to suggest an association between young CWS's positive emotional reactivity and stuttering, with negative reactivity seemingly more associated with these children's stuttering during positive emotional stress (a stress condition possibly associated with lesser degrees of emotion regulation). Such findings seem to support the notion that emotional processes warrant inclusion in any truly comprehensive account of childhood stuttering.


Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

Purpose
The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks.
Method
We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment.
Results
In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect.
Conclusions
The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze.


Clear Speech Variants: An Acoustic Study in Parkinson's Disease

Purpose
The authors investigated how different variants of clear speech affect segmental and suprasegmental acoustic measures of speech in speakers with Parkinson's disease and a healthy control group.
Method
A total of 14 participants with Parkinson's disease and 14 control participants served as speakers. Each speaker produced 18 different sentences selected from the Sentence Intelligibility Test (Yorkston & Beukelman, 1996). All speakers produced stimuli in 4 speaking conditions (habitual, clear, overenunciate, and hearing impaired). Segmental acoustic measures included vowel space area and first moment (M1) coefficient difference measures for consonant pairs. Second formant slope of diphthongs and measures of vowel and fricative durations were also obtained. Suprasegmental measures included fundamental frequency, sound pressure level, and articulation rate.
Results
For the majority of adjustments, all variants of clear speech instruction differed from the habitual condition. The overenunciate condition elicited the greatest magnitude of change for segmental measures (vowel space area, vowel durations) and the slowest articulation rates. The hearing impaired condition elicited the greatest fricative durations and suprasegmental adjustments (fundamental frequency, sound pressure level).
Conclusions
Findings have implications for a model of speech production for healthy speakers as well as for speakers with dysarthria. Findings also suggest that particular clear speech instructions may target distinct speech subsystems.
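One of the segmental measures above, vowel space area, is conventionally computed as the area of the polygon formed by the corner vowels in F1-F2 space, which the shoelace formula gives directly. A minimal sketch, assuming formant pairs are supplied in polygon order; the function name and input format are illustrative:

```python
def vowel_space_area(formants):
    """Area (in Hz^2) of the polygon whose vertices are (F1, F2) pairs for
    the corner vowels, computed with the shoelace formula. `formants` is an
    ordered list of (F1, F2) tuples tracing the polygon boundary."""
    n = len(formants)
    area = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

A larger area under clear-speech instructions, as in the overenunciate condition above, reflects more peripheral corner-vowel formants.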


New Directions for Auditory Training: Introduction

Purpose
The purpose of this research forum article is to provide an overview of a collection of invited articles on contemporary issues in auditory training.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bqafrr
via IFTTT

Prevalence and Predictors of Persistent Speech Sound Disorder at Eight Years Old: Findings From a Population Cohort Study

Purpose
The purpose of this study was to determine prevalence and predictors of persistent speech sound disorder (SSD) in children aged 8 years after disregarding children presenting solely with common clinical distortions (i.e., residual errors).
Method
Data from the Avon Longitudinal Study of Parents and Children (Boyd et al., 2012) were used. Children were classified as having persistent SSD on the basis of percentage of consonants correct measures from connected speech samples. Multivariable logistic regression analyses were performed to identify predictors.
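At its core, the percentage-of-consonants-correct measure used to classify persistent SSD is the share of target consonants realized correctly in a transcribed sample. A minimal sketch under that simplification (real PCC scoring applies detailed rules to connected speech; the aligned token lists here are hypothetical):

```python
def percent_consonants_correct(targets, productions):
    """Share of target consonants realized correctly, as a percentage.
    `targets` and `productions` are aligned lists of consonant tokens
    from a transcribed speech sample."""
    if len(targets) != len(productions):
        raise ValueError("token lists must be aligned")
    correct = sum(t == p for t, p in zip(targets, productions))
    return 100.0 * correct / len(targets)

# Hypothetical aligned transcription: target /s/ produced as [t]
score = percent_consonants_correct(["k", "s", "t", "d"], ["k", "t", "t", "d"])
```

Children falling below a chosen PCC cutoff on such samples would be the ones classified as having persistent SSD.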
Results
The estimated prevalence of persistent SSD was 3.6%. Children with persistent SSD were more likely to be boys and to come from families who were not homeowners. Early childhood predictors identified as important were weak sucking at 4 weeks, not often combining words at 24 months, limited use of word morphology at 38 months, and being unintelligible to strangers at 38 months. School-age predictors identified as important were maternal report of difficulty pronouncing certain sounds and hearing impairment at age 7 years, tympanostomy tube insertion at any age up to 8 years, and a history of suspected coordination problems. The contribution of these findings to our understanding of risk factors for persistent SSD and the nature of the condition is considered.
Conclusion
Variables identified as predictive of persistent SSD suggest that factors across motor, cognitive, and linguistic processes may place a child at risk.

from #Audiology via ola Kala on Inoreader http://ift.tt/296H4hB
via IFTTT

Spontaneous Gesture Production and Lexical Abilities in Children With Specific Language Impairment in a Naming Task

Purpose
The purpose of the study was to investigate the role that cospeech gestures play in lexical production in preschool-age children with expressive specific language impairment (E-SLI).
Method
Fifteen preschoolers with E-SLI and 2 groups of typically developing (TD) children matched for chronological age (n = 15, CATD group) and for language abilities (n = 15, LATD group) completed a picture-naming task. The accuracy of the spoken answers (coded for types of correct and incorrect answers), the modality of expression (spoken and/or gestural), types of gestures, and semantic relationship between gestures and speech produced by children in the different groups were compared.
Results
Children with SLI produced higher rates of phonological simplifications and unintelligible answers than CATD children, but lower rates of semantic errors than LATD children. Unlike the TD children, they did not show a significant preference for spoken answers. Like the LATD children, they used gestures, both deictic and representational, at higher rates than CATD children, both reinforcing the information conveyed in speech and adding correct information to co-occurring speech.
Conclusions
These findings support the hypotheses that children with SLI rely on gestures for scaffolding their speech and do not have a clear preference for the spoken modality, as TD children do, and have implications for educational and clinical practice.

from #Audiology via ola Kala on Inoreader http://ift.tt/2aMZLFE
via IFTTT

Evidence That Bimanual Motor Timing Performance Is Not a Significant Factor in Developmental Stuttering

Purpose
Stuttering involves a breakdown in the speech motor system. We address whether stuttering in its early stage is specific to the speech motor system or whether its impact is observable across motor systems.
Method
As an extension of Olander, Smith, and Zelaznik (2010), we measured bimanual motor timing performance in 115 children: 70 children who stutter (CWS) and 45 children who do not stutter (CWNS). The children repeated the clapping task yearly for up to 5 years, using a synchronization-continuation rhythmic timing paradigm. Two analyses were completed. A cross-sectional analysis of data from the initial year of the study (ages 4;0 [years;months] to 5;11) compared clapping performance between CWS and CWNS. A second, multiyear analysis assessed clapping behavior across ages 3;5–9;5 to examine any potential relationship between clapping performance and eventual persistence of or recovery from stuttering.
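The timing measures this paradigm yields, clapping rate and interclap-interval variability, can be derived from clap onset times alone. A sketch, assuming onset times have already been extracted from the recordings (the timestamps below are illustrative, not study data):

```python
from statistics import mean, pstdev

def interclap_stats(clap_times):
    """Return (mean interclap interval, coefficient of variation).
    `clap_times` are clap onset times in seconds; the coefficient of
    variation of the interval is a standard rhythmic-timing
    variability measure."""
    icis = [b - a for a, b in zip(clap_times, clap_times[1:])]
    m = mean(icis)
    return m, pstdev(icis) / m

# Illustrative onsets from a continuation phase (metronome removed)
mean_ici, cv = interclap_stats([0.0, 0.52, 1.01, 1.55, 2.04])
```

Group comparisons like those reported below would then be run on the mean interval (rate) and the coefficient of variation (variability) per child.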
Results
Preschool CWS were not different from CWNS on rates of clapping or variability in interclap interval. In addition, no relationship was found between bimanual motor timing performance and eventual persistence in or recovery from stuttering. The disparity between the present findings for preschoolers and those of Olander et al. (2010) most likely arises from the smaller sample size used in the earlier study.
Conclusion
From the current findings, on the basis of data from relatively large samples of stuttering and nonstuttering children tested over multiple years, we conclude that a bimanual motor timing deficit is not a core feature of early developmental stuttering.

from #Audiology via ola Kala on Inoreader http://ift.tt/29wyc2L
via IFTTT

Initial Stop Voicing in Bilingual Children With Cochlear Implants and Their Typically Developing Peers With Normal Hearing

Purpose
This study focuses on stop voicing differentiation in bilingual children with normal hearing (NH) and their bilingual peers with hearing loss who use cochlear implants (CIs).
Method
Twenty-two bilingual children participated in our study (11 with NH, M age = 5;1 [years;months], and 11 with CIs, M hearing age = 5;1). The groups were matched on hearing age and a range of demographic variables. Single-word picture elicitation was used with word-initial singleton stop consonants. Repeated measures analyses of variance with three within-subject factors (language, stop voicing, and stop place of articulation) and one between-subjects factor (NH vs. CI user) were conducted with voice onset time and percentage of prevoiced stops as dependent variables.
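One of the dependent variables, percentage of prevoiced stops, reduces to the share of tokens with a negative voice onset time, i.e. voicing that begins before the release burst. A minimal sketch, assuming VOTs in milliseconds have already been measured from the recordings (the values below are hypothetical):

```python
def percent_prevoiced(vots_ms):
    """Percentage of stop tokens with negative voice onset time
    (voicing onset preceding the release burst)."""
    return 100.0 * sum(v < 0 for v in vots_ms) / len(vots_ms)

# Hypothetical VOT values (ms): two prevoiced, two short-lag tokens
share = percent_prevoiced([-85.0, 12.0, -60.0, 18.0])
```

Per-token VOT and this percentage, computed per language and per stop place of articulation, would serve as the two dependent variables in the repeated measures analyses.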
Results
Main effects were statistically significant for language, stop voicing, and stop place of articulation on both voice onset time and prevoicing. There were no significant main effects for the NH versus CI groups. Both children with NH and children with CIs differentiated stop voicing in their languages and by stop place of articulation, and stop voicing differentiation was comparable across the two groups.
Conclusions
Stop voicing differentiation is accomplished in a similar fashion by bilingual children with NH and CIs, and both groups differentiate stop voicing in a language-specific fashion.

from #Audiology via ola Kala on Inoreader http://ift.tt/297Iplo
via IFTTT

Auditory Training With Frequent Communication Partners

Purpose
Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)—speech they most likely desire to recognize—under the assumption that familiarity with the FCP's speech limits potential gains. This study determined whether auditory training with the speech of an individual's FCP, in this case their spouse, would lead to enhanced recognition of their spouse's speech.
Method
Ten couples completed a 6-week computerized auditory training program in which the spouse recorded the stimuli and the participant (partner with hearing loss) completed auditory training that presented recordings of their spouse.
Results
Training led participants to better discriminate their FCP's speech. Responses on the Client Oriented Scale of Improvement (Dillon, James, & Ginis, 1997) indicated that training subjectively reduced participants' communication difficulties. Performance on a word identification task did not change.
Conclusions
Results suggest that auditory training might improve the ability of older participants with hearing loss to recognize the speech of their spouse and might improve communication interactions between couples. The results support a task-appropriate processing framework of learning, which assumes that human learning depends on the degree of similarity between training tasks and desired outcomes.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bqa7Z9
via IFTTT