Friday, January 13, 2017

Benefits of Music Training for Perception of Emotional Speech Prosody in Deaf Children With Cochlear Implants.

Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jguIa0
via IFTTT

Sound Localization and Speech Perception in Noise of Pediatric Cochlear Implant Recipients: Bimodal Fitting Versus Bilateral Cochlear Implants.

Objectives: The aim of this study was to compare binaural performance on an auditory localization task and a speech-perception-in-babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Design: Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from -90° azimuth to +90° azimuth (15° intervals) in the horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker during each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated the hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in four different listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or -90°), or from both side speakers (±90°). A speech, spatial, and quality questionnaire was administered. Results: When the two groups of children were directly compared, there was no significant difference in localization accuracy or hemifield identification score under the binaural condition. Speech perception performance was also similar between groups under most babble conditions. However, when the babble was from the first device side (the CI side for children with bimodal stimulation or the first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners. Speech, spatial, and quality scores were comparable between the two groups. Conclusions: Overall, binaural performance was similar between children fit with two CIs (CI + CI) and those who use bimodal stimulation (HA + CI) in most conditions. However, the bilateral CI group showed better speech perception than the bimodal group when babble was from the first device side (the first CI side for bilateral CI users or the CI side for bimodal listeners). Therefore, if bimodal performance is significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that transitioning the child from bimodal stimulation to bilateral CIs should be considered. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.
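
The error and hemifield scoring described in the Design section reduce to simple arithmetic over trials. As a rough illustration, here is a hypothetical sketch in Python (variable names are ours, not the authors' analysis code):

    # Hypothetical sketch of the localization scoring described above.
    # target_deg / response_deg: loudspeaker azimuths (degrees) per trial.
    def mean_absolute_error(target_deg, response_deg):
        errors = [abs(t - r) for t, r in zip(target_deg, response_deg)]
        return sum(errors) / len(errors)

    def hemifield_score(target_deg, response_deg):
        # Proportion of lateral trials answered in the correct (left/right)
        # hemifield; trials with a 0-degree target are ignored.
        lateral = [(t, r) for t, r in zip(target_deg, response_deg) if t != 0]
        correct = sum(1 for t, r in lateral if (t > 0) == (r > 0))
        return correct / len(lateral)

    print(mean_absolute_error([-45, 30, 90], [-30, 45, 60]))  # 20.0
    print(hemifield_score([-45, 30, 90], [-30, 45, 60]))      # 1.0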

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jG0J7Q
via IFTTT

Using Neural Response Telemetry to Monitor Physiological Responses to Acoustic Stimulation in Hybrid Cochlear Implant Users.

Objective: This report describes the results of a series of experiments in which we used the neural response telemetry (NRT) system of the Nucleus cochlear implant (CI) to measure the response of the peripheral auditory system to acoustic stimulation in Nucleus Hybrid CI users. The objectives of this study were to determine whether responses from hair cells and neurons could be separated and to evaluate the stability of these measures over time. Design: Forty-four CI users participated. All had residual acoustic hearing and used a Nucleus Hybrid S8, S12, or L24 CI or the standard lateral wall CI422 implant. The NRT system of the CI was used to trigger an acoustic stimulus (500-Hz tone burst or click), which was presented at a low stimulation rate (10, 15, or 50 per second) to the implanted ear via an insert earphone, and to record the cochlear microphonic, the auditory nerve neurophonic, and the compound action potential (CAP) from an apical intracochlear electrode. Recording acoustically evoked responses requires a longer time window than is available with the commercial NRT software. This limitation was circumvented by making multiple recordings for each stimulus using different time delays between the onset of stimulation and the onset of averaging; these recordings were then concatenated off-line. Matched recordings elicited using positive- and negative-polarity stimuli were added off-line to emphasize neural potentials (SUM) and subtracted off-line to emphasize potentials primarily generated by cochlear hair cells (DIF). These assumptions regarding the origin of the SUM and DIF components were tested by comparing the magnitude of these derived responses recorded using various stimulation rates. Magnitudes of the SUM and DIF components were compared with each other and with behavioral thresholds. Results: SUM and DIF components were identified for most subjects, consistent with both hair cell and neural responses to acoustic stimulation. For a subset of the study participants, the DIF components grew as stimulus level was increased, but little or no SUM component was identified. Latency of the CAPs in response to click stimuli was long relative to reports in the literature of recordings obtained using extracochlear electrodes. This difference in response latency and in the general morphology of the recorded CAPs was likely due to differences across subjects in hearing loss configuration. The use of high stimulation rates tended to decrease SUM and CAP components more than DIF components; we suggest this effect reflects neural adaptation. In some individuals, repeated measures were made over intervals as long as 9 months. Changes over time in DIF, SUM, and CAP thresholds mirrored changes in audiometric threshold for the subjects who experienced loss of acoustic hearing in the implanted ear. Conclusions: The Nucleus NRT software can be used to record peripheral responses to acoustic stimulation at threshold and suprathreshold levels, providing a window into the status of the auditory hair cells and the primary afferent nerve fibers. These acoustically evoked responses are sensitive to changes in hearing status and consequently could be useful in characterizing the specific pathophysiology of the hearing loss experienced by this population of CI users. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.
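
The off-line SUM/DIF derivation described in the Design section amounts to averaging and differencing the polarity-matched waveforms. A minimal sketch, with hypothetical array and segment names rather than the authors' actual pipeline:

    import numpy as np

    def sum_dif(resp_pos, resp_neg):
        # resp_pos / resp_neg: averaged intracochlear recordings evoked by
        # positive- and negative-polarity acoustic stimuli (equal lengths).
        # SUM cancels the polarity-following cochlear microphonic and
        # emphasizes neural potentials (neurophonic/CAP); DIF cancels the
        # neural components and emphasizes hair-cell potentials (CM).
        resp_pos = np.asarray(resp_pos, dtype=float)
        resp_neg = np.asarray(resp_neg, dtype=float)
        return (resp_pos + resp_neg) / 2.0, (resp_pos - resp_neg) / 2.0

    # The separate delayed recordings would first be concatenated off-line
    # to build a long enough time window, e.g. (segment names hypothetical):
    # resp_pos = np.concatenate([seg_delay0, seg_delay1, seg_delay2])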

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jgD0yF
via IFTTT

Characterization of Vocal Fold Vibration in Sulcus Vocalis Using High-Speed Digital Imaging

Purpose
The aim of the present study was to qualitatively and quantitatively characterize vocal fold vibrations in sulcus vocalis by high-speed digital imaging (HSDI) and to clarify the correlations between HSDI-derived parameters and traditional vocal parameters.
Method
HSDI was performed in 20 vocally healthy subjects (8 men and 12 women) and 41 patients with sulcus vocalis (33 men and 8 women). Then HSDI data were evaluated by assessing the visual–perceptual rating, digital kymography, and glottal area waveform.
Results
Patients with sulcus vocalis frequently had spindle-shaped glottal gaps and a decreased mucosal wave. Compared with the control group, the sulcus vocalis group showed a higher open quotient as well as a shorter duration of the visible mucosal wave, a smaller speed index, and a smaller glottal area difference index ([maximal glottal area – minimal glottal area]/maximal glottal area). These parameters deteriorated progressively from the control group through Type I, II, and III sulcus vocalis. There were no gender-related differences. Strong correlations were noted between the open quotient and the type of sulcus vocalis.
Conclusions
HSDI was an effective method for documenting the characteristics of vocal fold vibrations in patients with sulcus vocalis and estimating the severity of dysphonia.
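
For reference, the glottal area difference index quoted in the Results is simply the normalized range of the glottal area waveform. A one-line sketch (function and variable names are ours, for illustration only):

    def glottal_area_difference_index(max_area, min_area):
        # (maximal glottal area - minimal glottal area) / maximal glottal area,
        # computed per vibratory cycle from the glottal area waveform.
        return (max_area - min_area) / max_area

    print(glottal_area_difference_index(1.0, 0.2))  # 0.8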

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iXvacG
via IFTTT

A Clinical Evaluation of the Competing Sources of Input Hypothesis

Purpose
Our purpose was to test the competing sources of input (CSI) hypothesis by evaluating an intervention based on its principles. This hypothesis proposes that children's use of main verbs without tense is the result of their treating certain sentence types in the input (e.g., Was she laughing?) as models for declaratives (e.g., She laughing).
Method
Twenty preschoolers with specific language impairment were randomly assigned to receive either a CSI-based intervention or a more traditional intervention that lacked the novel CSI features. The auxiliary is and the third-person singular suffix –s were directly treated over a 16-week period. Past tense –ed was monitored as a control.
Results
The CSI-based group exhibited greater improvements in use of is than did the traditional group (d = 1.31), providing strong support for the CSI hypothesis. There were no significant between-groups differences in the production of the third-person singular suffix –s or the control (–ed), however.
Conclusions
The group differences in the effects on the 2 treated morphemes may be due to differences in their distribution in interrogatives and declaratives (e.g., Is he hiding/He is hiding vs. Does he hide/He hides). Refinements in the intervention could address this issue and lead to more general effects across morphemes.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jbmc9h
via IFTTT

Cognitive Load in Voice Therapy Carry-Over Exercises

Purpose
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication.
Method
Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire.
Results
Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks.
Conclusions
The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2hMEGyu
via IFTTT

Hearing, Auditory Processing, and Language Skills of Male Youth Offenders and Remandees in Youth Justice Residences in New Zealand

Purpose
International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand.
Method
Thirty-three male YORs, aged 14–17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test.
Results
Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI.
Conclusions
Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2j92JGB
via IFTTT

The Impact of Contrastive Stress on Vowel Acoustics and Intelligibility in Dysarthria

Purpose
To compare vowel acoustics and intelligibility in words produced with and without contrastive stress by speakers with spastic (mixed-spastic) dysarthria secondary to cerebral palsy (DYSCP) and healthy controls (HCs).
Method
Fifteen participants (9 men, 6 women; age M = 42 years) with DYSCP and 15 HCs (9 men, 6 women; age M = 36 years) produced sentences containing target words with and without contrastive stress. Forty-five healthy listeners (age M = 25 years) completed a vowel identification task of DYSCP productions. Vowel acoustics were compared across stress conditions and groups using 1st (F1) and 2nd (F2) formant measures. Perceptual intelligibility was compared across stress conditions and dysarthria severity.
Results
F1 and F2 significantly increased in stressed words for both groups, although the degree of change differed. Mean Euclidean distance between vowels also increased with stress. The relative probability of vowels falling within the target F1 × F2 space was greater for HCs but did not differ with stress. Stress production resulted in greater listener vowel identification accuracy for speakers with mild dysarthria.
Conclusions
Contrastive stress affected vowel formants for both groups. Perceptual results suggest that some speakers with dysarthria may benefit from a contrastive stress strategy to improve vowel intelligibility.
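
The vowel-space expansion reported above rests on distances in the F1 × F2 plane. A minimal sketch of that computation, using illustrative formant values rather than study data:

    import math

    def vowel_distance(f1_a, f2_a, f1_b, f2_b):
        # Euclidean distance (Hz) between two vowels in F1 x F2 space.
        return math.hypot(f1_a - f1_b, f2_a - f2_b)

    # e.g., an /i/-like token (300, 2300) vs. an /a/-like token (700, 1200):
    print(round(vowel_distance(300, 2300, 700, 1200)))  # ~1170 Hz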

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jbhBnE
via IFTTT

The Interaction of Lexical Characteristics and Speech Production in Parkinson's Disease

Purpose
This study sought to investigate the interaction of speech movement execution with higher order lexical parameters. The authors examined how lexical characteristics affect speech output in individuals with Parkinson's disease (PD) and healthy control (HC) speakers.
Method
Twenty speakers with PD and 12 healthy speakers read sentences with target words that varied in word frequency and neighborhood density. The formant transitions (F2 slopes) of the diphthongs in the target words were compared across lexical categories between PD and HC groups.
Results
Both groups of speakers produced steeper F2 slopes for the diphthongs in less frequent words and words from sparse neighborhoods. The magnitude of the increase in F2 slopes was significantly smaller in the PD group than in the HC group. The lexical effect on the F2 slope differed among the diphthongs and between the 2 groups.
Conclusions
PD and healthy speakers varied their acoustic output on the basis of word frequency and neighborhood density. F2 slope variations can be traced to higher level lexical differences. This lexical effect on articulation, however, appears to be constrained by PD.
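
An F2 slope is conventionally the change in the second formant across the diphthong transition divided by the transition duration; the sketch below illustrates that definition (the study's exact measurement procedure may differ, e.g., it may fit a regression over the transition):

    def f2_slope(f2_onset_hz, f2_offset_hz, duration_s):
        # Formant transition slope in Hz/s; steeper slopes reflect larger
        # or faster articulatory movement across the diphthong.
        return (f2_offset_hz - f2_onset_hz) / duration_s

    # e.g., a diphthong whose F2 rises from 1200 to 2000 Hz over 150 ms:
    print(f2_slope(1200.0, 2000.0, 0.150))  # ~5333 Hz/s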

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jfTmYo
via IFTTT

Hearing Impairment and Undiagnosed Disease: The Potential Role of Clinical Recommendations

Purpose
The objective of this study was to use cross-sectional, nationally representative data to examine the relationship between self-reported hearing impairment and undetected diabetes, hypertension, hypercholesterolemia, and chronic kidney disease.
Method
We analyzed the National Health and Nutrition Examination Survey for the years 2007–2012 for individuals 40 years of age and older without previously diagnosed cardiovascular disease. Analyses were conducted examining hearing impairment and undiagnosed disease.
Results
The unweighted sample size was 9,786, representing 123,444,066 Americans. Hearing impairment was reported in 10.2% of the individuals. In unadjusted analyses, there was no significant difference between adults with hearing impairment and adults with typical hearing for undiagnosed diabetes, hypertension, or hypercholesterolemia. A higher proportion of adults with hearing impairment than adults with typical hearing had undiagnosed chronic kidney disease (20.1% vs. 10.7%; p = .0001). In models adjusting for demographics and health care utilization, hearing impairment was associated with a higher likelihood of having undiagnosed chronic kidney disease (odds ratio = 1.53, 95% CI [1.23, 1.91]).
Conclusions
Individuals with hearing impairment are more likely to have undiagnosed chronic kidney disease. Hearing impairment may affect disclosure of important signs and symptoms as well as the comprehension of medical conversations for chronic disease management. General practitioners can play a critical role in improving medical communication by responding with sensitivity to the signs of hearing impairment in their patients.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2igpdEm
via IFTTT

Pure-Tone–Spondee Threshold Relationships in Functional Hearing Loss: A Test of Loudness Contribution

Purpose
The purpose of this article is to examine explanations for pure-tone average–spondee threshold differences in functional hearing loss.
Method
Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20–90 dB SPL). Participants listened monaurally through earphones. Loudness predictions were obtained for the same stimuli by using a computational, dynamic loudness model.
Results
When evaluated at the same SPL, speech-shaped noise was judged louder than vowels/spondees, which were judged louder than tones. Equal-loudness levels were inferred from fitted loudness functions for the group. For the clinical application, the 2.1-dB difference between spondees and tones at equal loudness became a 12.1-dB difference when the stimuli were converted from SPL to HL.
Conclusions
Nearly all of the pure-tone average–spondee threshold differences in functional hearing loss are attributable to the calibration references for 0 dB HL for tones and speech, which are based on detection and recognition, respectively. The recognition threshold for spondees is roughly 9 dB higher than the speech detection threshold; persons feigning a loss, who base loss magnitude on loudness, do not consider this difference. Furthermore, the dynamic loudness model was more accurate than the static model.
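
The SPL-to-HL conversion behind that shift simply subtracts a stimulus-specific reference equivalent threshold level, and tones and speech use different references. A sketch with illustrative (not standards-exact) reference values:

    def spl_to_hl(level_db_spl, retspl_db):
        # dB HL = dB SPL minus the reference equivalent threshold SPL
        # (RETSPL) for that stimulus type and transducer.
        return level_db_spl - retspl_db

    # Illustrative references only (hypothetical, roughly earphone-like):
    # speech ~20 dB SPL, 1000-Hz tone ~7 dB SPL. Two stimuli presented at
    # the same SPL then differ by roughly 13 dB once expressed in HL.
    print(spl_to_hl(60.0, 20.0))  # speech: 40 dB HL
    print(spl_to_hl(60.0, 7.0))   # tone:   53 dB HL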

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ho41Mm
via IFTTT

Directional Microphone Hearing Aids in School Environments: Working Toward Optimization

Purpose
The hearing aid microphone setting (omnidirectional or directional) can be selected manually or automatically. This study examined the percentage of time the microphone setting selected using each method was judged to provide the best signal-to-noise ratio (SNR) for the talkers of interest in school environments.
Method
A total of 26 children (aged 6–17 years) with hearing loss were fitted with study hearing aids and evaluated during 2 typical school days. Time-stamped hearing aid settings were compared with observer judgments of the microphone setting that provided the best SNR on the basis of the specific listening environment.
Results
Despite training for appropriate use, school-age children were unlikely to consistently manually switch to the microphone setting that optimized SNR. Furthermore, there was only fair agreement between the observer judgments and the hearing aid setting chosen by the automatic switching algorithm. Factors contributing to disagreement included the hearing aid algorithm choosing the directional setting when the talker was not in front of the listener or when noise arrived only from the front quadrant and choosing the omnidirectional setting when the noise level was low.
Conclusion
Consideration of listener preferences, talker position, sound level, and other factors in the classroom may be necessary to optimize microphone settings.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jfui0p
via IFTTT

A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia

Purpose
Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia.
Method
We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories.
Results
Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories.
Conclusions
The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2igpKpL
via IFTTT

Voice-Related Patient-Reported Outcome Measures: A Systematic Review of Instrument Development and Validation

Purpose
The purpose of this study was to perform a comprehensive systematic review of the literature on voice-related patient-reported outcome (PRO) measures in adults and to evaluate each instrument for the presence of important measurement properties.
Method
MEDLINE, the Cumulative Index of Nursing and Allied Health Literature, and the Health and Psychosocial Instrument databases were searched using relevant vocabulary terms and key terms related to PRO measures and voice. Inclusion and exclusion criteria were developed in consultation with an expert panel. Three independent investigators assessed study methodology using criteria developed a priori. Measurement properties were examined and entered into evidence tables.
Results
A total of 3,744 studies assessing voice-related constructs were identified. This list was narrowed to 32 PRO measures on the basis of predetermined inclusion and exclusion criteria. Questionnaire measurement properties varied widely. Important thematic deficiencies were apparent: (a) lack of patient involvement in the item development process, (b) lack of robust construct validity, and (c) lack of clear interpretability and scaling.
Conclusions
PRO measures are a principal means of evaluating treatment effectiveness in voice-related conditions. Despite their prominence, available PRO measures have disparate methodological rigor. Care must be taken to understand the psychometric and measurement properties and the applicability of PRO measures before advocating for their use in clinical or research applications.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ifll7c
via IFTTT

Auditory Training With Multiple Talkers and Passage-Based Semantic Cohesion

Purpose
Current auditory training methods typically result in improvements to speech recognition abilities in quiet, but learner gains may not extend to other domains in speech (e.g., recognition in noise) or self-assessed benefit. This study examined the potential of training involving multiple talkers and training emphasizing discourse-level top-down processing to produce more generalized learning.
Method
Normal-hearing participants (N = 64) were randomly assigned to 1 of 4 auditory training protocols using noise-vocoded speech simulating the processing of an 8-channel cochlear implant: sentence-based single-talker training, training with 24 different talkers, passage-based transcription training, and a control (transcribing unvocoded sentence materials). In all cases, participants completed 2 pretests under cochlear implant simulation, 1 hr of training, and 5 posttests to assess perceptual learning and cross-context generalization.
Results
Performance above the control was seen in all 3 experimental groups for sentence recognition in quiet. In addition, the multitalker training method generalized to a context word-recognition task, and the passage training method caused gains in sentence recognition in noise.
Conclusion
The gains of the multitalker and passage training groups over the control suggest that, with relatively small modifications, improvements to the generalized outcomes of auditory training protocols may be possible.
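
Noise-vocoded CI simulation of the kind used here typically splits speech into a small number of frequency bands, extracts each band's amplitude envelope, and uses the envelopes to modulate band-limited noise. A compact sketch; filter order, band edges, and the lack of envelope smoothing are our assumptions, not the study's exact parameters:

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=8000.0):
        # Crude n-channel noise vocoder: band-pass analysis, Hilbert-envelope
        # extraction, and envelope-modulated noise carriers summed together.
        edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
        noise = np.random.randn(len(signal))
        out = np.zeros(len(signal))
        for low, high in zip(edges[:-1], edges[1:]):
            sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, signal)
            envelope = np.abs(hilbert(band))           # amplitude envelope
            carrier = sosfilt(sos, noise)              # same-band noise carrier
            out += envelope * carrier
        return out / np.max(np.abs(out))               # peak-normalize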

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2hI318I
via IFTTT

Children's Use of Semantic Context in Perception of Foreign-Accented Speech

Purpose
The purpose of this study is to evaluate children's use of semantic context to facilitate foreign-accented word recognition in noise.
Method
Monolingual American English–speaking 5- to 7-year-olds (n = 168) repeated either Mandarin- or American English–accented sentences in babble, half of which contained final words that were highly predictable from context. The same final words were presented in the low- and high-predictability sentences.
Results
Word recognition scores were better in the high- than low-predictability contexts. Scores improved with age and were higher for the native than the Mandarin accent. The oldest children saw the greatest benefit from context; however, context benefit was similar regardless of speaker accent.
Conclusion
Despite significant acoustic-phonetic deviations from native norms, young children capitalize on contextual cues when presented with foreign-accented speech. Implications for spoken word recognition in children with speech, language, and hearing differences are discussed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2igqPxY
via IFTTT

Gap Detection in School-Age Children and Adults: Center Frequency and Ramp Duration

Purpose
The age at which gap detection becomes adultlike differs, depending on the stimulus characteristics. The present study evaluated whether the developmental trajectory differs as a function of stimulus frequency region or duration of the onset and offset ramps bounding the gap.
Method
Thresholds were obtained for wideband noise (500–4500 Hz) with 4- or 40-ms raised-cosine ramps and for a 25-Hz-wide low-fluctuation narrowband noise centered on either 500 or 5000 Hz with 40-ms ramps. Stimuli were played continuously at 70 dB SPL, and the task was to indicate which of 3 intervals contained a gap. Listeners were 5.2- to 15.1-year-old children (n = 40) and adults (n = 10) with normal hearing.
Results
Regardless of listener age, gap detection thresholds for the wideband noise tended to be lower when gaps were shaped using 4-ms rather than 40-ms ramps. Thresholds also tended to be lower for the low-fluctuation narrowband noise centered on 5000 Hz than 500 Hz. Performance reached adult levels after 11 years of age for all 4 stimuli. Maturation was not uniform across individuals, however; a subset of young children performed like adults, including some 5-year-olds.
Conclusion
For these stimuli, the developmental trajectory was similar regardless of narrowband noise center frequency or wideband noise onset and offset ramp duration.
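
The gap stimuli described in the Method are built by gating a noise carrier with raised-cosine offset and onset ramps on either side of a silent interval. A simplified sketch (band-limiting to 500–4500 Hz omitted; parameter values are illustrative):

    import numpy as np

    def raised_cosine_ramp(duration_s, fs):
        # Half-cycle raised-cosine onset ramp rising from 0 to 1.
        n = int(round(duration_s * fs))
        return 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / n))

    def gapped_noise(fs=44100, total_s=0.5, gap_s=0.005, ramp_s=0.004):
        # Noise with a silent gap in the middle, shaped by raised-cosine
        # offset/onset ramps bounding the gap.
        noise = np.random.randn(int(total_s * fs))
        ramp = raised_cosine_ramp(ramp_s, fs)
        gate = np.ones(len(noise))
        mid, half_gap = len(noise) // 2, int(gap_s * fs) // 2
        gate[mid - half_gap:mid + half_gap] = 0.0                     # the gap
        gate[mid - half_gap - len(ramp):mid - half_gap] = ramp[::-1]  # offset ramp
        gate[mid + half_gap:mid + half_gap + len(ramp)] = ramp        # onset ramp
        return noise * gate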

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jbbOyk
via IFTTT

Simulated Critical Differences for Speech Reception Thresholds

Purpose
Critical differences state by how much 2 test results have to differ in order to be significantly different. Critical differences for discrimination scores have been available for several decades, but they do not exist for speech reception thresholds (SRTs). This study presents and discusses how critical differences for SRTs can be estimated by Monte Carlo simulations. As an application of this method, critical differences are proposed for a 5-word sentences test (a matrix test) using 2 widely implemented adaptive test procedures.
Method
For each procedure, simulations were performed for different parameters: the number of test sentences, the j factor, the distribution of the subjects' true SRTs, and the slope of the discrimination function. For 1 procedure and 1 parameter setting, simulation data are compared with results found by listening tests (experimental data).
Results
The critical differences were found to depend on the parameters tested, including interactive effects. The critical differences found by simulation agree with data found experimentally.
Conclusions
As the critical differences for SRTs rely on multiple parameters, they must be determined for each parameter setting individually. However, with knowledge of the test setup, rules of thumb can be derived.
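
The simulation logic is to draw many adaptive-track SRT estimates for a listener with a known true SRT and discrimination-function slope, then read the critical difference from the distribution of test-retest differences. A rough sketch under simplified assumptions (a basic 1-up/1-down sentence-scored staircase and a logistic discrimination function, not the exact procedures evaluated in the article):

    import numpy as np

    rng = np.random.default_rng(1)

    def p_correct(snr_db, srt_db, slope=0.15):
        # Logistic discrimination function; slope in proportion correct per dB
        # at the SRT (the 50%-correct point).
        return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr_db - srt_db)))

    def simulate_srt(true_srt, n_sentences=20, step=2.0, start_snr=0.0):
        # 1-up/1-down staircase on sentence scoring; the SRT estimate is the
        # mean of the presented SNRs.
        snr, presented = start_snr, []
        for _ in range(n_sentences):
            presented.append(snr)
            correct = rng.random() < p_correct(snr, true_srt)
            snr += -step if correct else step
        return float(np.mean(presented))

    # 95% critical difference = 95th percentile of |test - retest| differences.
    diffs = [abs(simulate_srt(-7.0) - simulate_srt(-7.0)) for _ in range(5000)]
    print(np.percentile(diffs, 95))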

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2j2kw4g
via IFTTT

Economic Impact of Hearing Loss and Reduction of Noise-Induced Hearing Loss in the United States

Purpose
Hearing loss (HL) is pervasive and debilitating, and noise-induced HL is preventable by reducing environmental noise. Lack of economic analyses of HL impacts means that prevention and treatment remain a low priority for public health and environmental investment.
Method
This article estimates the costs of HL on productivity by building on established estimates for HL prevalence and wage and employment differentials between those with and without HL.
Results
We estimate that HL affects more than 13% of the working population. Not all HL can be prevented or treated, but if the 20% of HL resulting from excessive noise exposure were prevented, the economic benefit would be substantial—we estimate a range of $58 billion to $152 billion annually, with a core estimate of $123 billion. We believe this is a conservative estimate, because consideration of additional costs of HL, including health care and special education, would likely further increase the benefits associated with HL prevention.
Conclusion
HL is costly and warrants additional emphasis in public and environmental health programs. This study represents an important first step in valuing HL prevention—in particular, prevention of noise-induced HL—where new policies and technologies appear promising.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2hVYzj7
via IFTTT

Self-Assessed Hearing Handicap in Older Adults With Poorer-Than-Predicted Speech Recognition in Noise

Purpose
Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility.
Method
We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility.
Results
One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds.
Conclusion
The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jbiCvB
via IFTTT

English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

Purpose
We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English.
Method
In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time.
Results
Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern.
Conclusions
Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jfN8bc
via IFTTT

A Longitudinal Study in Children With Sequential Bilateral Cochlear Implants: Time Course for the Second Implanted Ear and Bilateral Performance

Purpose
Whether, and if so when, a second-ear cochlear implant should be provided to older, unilaterally implanted children is an ongoing clinical question. This study evaluated rate of speech recognition progress for the second implanted ear and with bilateral cochlear implants in older sequentially implanted children and evaluated localization abilities.
Method
A prospective longitudinal study included 24 bilaterally implanted children (mean ages at first and second ear surgeries: 5.11 and 14.25 years). Test intervals were every 3–6 months through 24 months after bilateral implantation. Speech recognition and localization were tested with each ear alone and bilaterally.
Results
Overall, the rate of progress for the second implanted ear was gradual. Improvements in quiet continued through the second year of bilateral use. Improvements in noise were more modest and leveled off during the second year. On all measures, results from the second ear were poorer than the first. Bilateral scores were better than either ear alone for all measures except sentences in quiet and localization.
Conclusions
Older sequentially implanted children with several years between surgeries may obtain speech understanding in the second implanted ear; however, performance may be limited and rate of progress gradual. Continued contralateral ear hearing aid use and reduced time between surgeries may enhance outcomes.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iXsETA
via IFTTT

The Effects of Directional Processing on Objective and Subjective Listening Effort

Purpose
The purposes of this investigation were (a) to evaluate the effects of hearing aid directional processing on subjective and objective listening effort and (b) to investigate the potential relationships between subjective and objective measures of effort.
Method
Sixteen adults with mild to severe hearing loss were tested with study hearing aids programmed with 3 settings: omnidirectional, fixed directional, and bilateral beamformer. A dual-task paradigm and subjective ratings were used to assess objective and subjective listening effort, respectively, at 2 signal-to-noise ratios. Testing occurred in rooms with either low or moderate reverberation.
Results
Directional processing improved subjective and objective listening effort, although benefit for objective effort was found only in moderate reverberation. Subjective reports of work and tiredness were more highly correlated with word recognition performance than objective listening effort. However, subjective ratings about control were significantly correlated with objective listening effort.
Conclusions
Directional microphone technology in hearing aids has the potential to improve listening effort in moderately reverberant environments. In addition, subjective questions that probe a listener's desire to exercise control may be a viable method for eliciting ratings that are significantly related to objective listening effort.
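
For readers unfamiliar with dual-task measures, the sketch below shows one common way objective listening effort is quantified: the response-time cost on the secondary task relative to a single-task baseline. The numbers and the exact metric are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of a dual-task "objective effort" score: the slowdown in
# secondary-task response time relative to a single-task baseline. Values invented.
baseline_rt_ms = 420.0            # secondary task performed alone
dual_task_rt_ms = {               # secondary task while repeating speech
    "omnidirectional": 565.0,
    "fixed directional": 530.0,
    "bilateral beamformer": 515.0,
}

for setting, rt in dual_task_rt_ms.items():
    cost = rt - baseline_rt_ms    # larger cost = more listening effort
    print(f"{setting}: dual-task cost = {cost:.0f} ms")
```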

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iXxkZR
via IFTTT

Cognitive Load in Voice Therapy Carry-Over Exercises

Purpose
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication.
Method
Twelve subjects produced speech in 3 conditions: rote speech (reciting the days of the week), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire.
Results
Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks.
Conclusions
The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
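
A hedged sketch of the analysis pattern described above: extracting principal components from questionnaire ratings and correlating component scores with secondary-task response times. The data, number of rating items, and two-component choice are all hypothetical.

```python
# Hypothetical sketch: principal components of subjective ratings vs. response times.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
ratings = rng.integers(1, 8, size=(12, 6)).astype(float)  # 12 subjects x 6 rating items
rt_ms = rng.normal(600, 80, size=12)                       # secondary-task response times

z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)  # standardize items
scores = PCA(n_components=2).fit_transform(z)                # component scores per subject
r, p = pearsonr(scores[:, 0], rt_ms)
print(f"PC1 score vs. response time: r = {r:.2f}, p = {p:.3f}")
```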

from #Audiology via ola Kala on Inoreader http://ift.tt/2hMEGyu
via IFTTT

Hearing, Auditory Processing, and Language Skills of Male Youth Offenders and Remandees in Youth Justice Residences in New Zealand

Purpose
International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand.
Method
Thirty-three male YORs, aged 14–17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test.
Results
Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI.
Conclusions
Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.

from #Audiology via ola Kala on Inoreader http://ift.tt/2j92JGB
via IFTTT

The Interaction of Lexical Characteristics and Speech Production in Parkinson's Disease

Purpose
This study sought to investigate the interaction of speech movement execution with higher order lexical parameters. The authors examined how lexical characteristics affect speech output in individuals with Parkinson's disease (PD) and healthy control (HC) speakers.
Method
Twenty speakers with PD and 12 healthy speakers read sentences with target words that varied in word frequency and neighborhood density. The formant transitions (F2 slopes) of the diphthongs in the target words were compared across lexical categories between PD and HC groups.
Results
Both groups of speakers produced steeper F2 slopes for the diphthongs in less frequent words and words from sparse neighborhoods. The magnitude of the increase in F2 slopes was significantly less in the PD than HC group. The lexical effect on the F2 slope differed among the diphthongs and between the 2 groups.
Conclusions
PD and healthy speakers varied their acoustic output on the basis of word frequency and neighborhood density. F2 slope variations can be traced to higher level lexical differences. This lexical effect on articulation, however, appears to be constrained by PD.
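
A small sketch of how an F2 slope can be estimated from a formant track by a least-squares fit over the diphthong transition; the time points and frequencies below are invented, and the authors' exact measurement procedure may differ.

```python
# Hypothetical sketch: estimating an F2 slope (Hz/ms) over a diphthong transition.
import numpy as np

time_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
f2_hz = np.array([1100, 1180, 1300, 1450, 1600, 1720, 1800], dtype=float)

# Least-squares slope of F2 over the transition; a steeper slope reflects a
# faster, larger articulatory movement.
slope_hz_per_ms, intercept = np.polyfit(time_ms, f2_hz, deg=1)
print(f"F2 slope: {slope_hz_per_ms:.1f} Hz/ms")
```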

from #Audiology via ola Kala on Inoreader http://ift.tt/2jfTmYo
via IFTTT

Hearing Impairment and Undiagnosed Disease: The Potential Role of Clinical Recommendations

Purpose
The objective of this study was to use cross-sectional, nationally representative data to examine the relationship between self-reported hearing impairment and undetected diabetes, hypertension, hypercholesterolemia, and chronic kidney disease.
Method
We analyzed the National Health and Nutrition Examination Survey for the years 2007–2012 for individuals 40 years of age and older without previously diagnosed cardiovascular disease. Analyses were conducted examining hearing impairment and undiagnosed disease.
Results
The unweighted sample size was 9,786, representing 123,444,066 Americans. Hearing impairment was reported in 10.2% of the individuals. In unadjusted analyses, there was no significant difference between adults with hearing impairment and adults with typical hearing for undiagnosed diabetes, hypertension, or hypercholesterolemia. A higher proportion of adults with hearing impairment than adults with typical hearing had undiagnosed chronic kidney disease (20.1% vs. 10.7%; p = .0001). In models adjusting for demographics and health care utilization, hearing impairment was associated with a higher likelihood of having undiagnosed chronic kidney disease (odds ratio = 1.53, 95% CI [1.23, 1.91]).
Conclusions
Individuals with hearing impairment are more likely to have undiagnosed chronic kidney disease. Hearing impairment may affect disclosure of important signs and symptoms as well as the comprehension of medical conversations for chronic disease management. General practitioners can play a critical role in improving medical communication by responding with sensitivity to the signs of hearing impairment in their patients.
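
To make the reported statistic concrete, the arithmetic below shows how an adjusted odds ratio and its 95% confidence interval follow from a logistic regression coefficient; the coefficient and standard error are back-derived from the reported OR of 1.53, 95% CI [1.23, 1.91], purely for illustration.

```python
# Arithmetic sketch: odds ratio and 95% CI from a logistic regression coefficient.
# The coefficient and standard error are back-derived from the reported values.
import math

beta = math.log(1.53)                                  # log-odds for hearing impairment
se = (math.log(1.91) - math.log(1.23)) / (2 * 1.96)    # implied standard error

or_point = math.exp(beta)
ci_low, ci_high = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"OR = {or_point:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```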

from #Audiology via ola Kala on Inoreader http://ift.tt/2igpdEm
via IFTTT

Pure-Tone–Spondee Threshold Relationships in Functional Hearing Loss: A Test of Loudness Contribution

Purpose
The purpose of this article is to examine explanations for pure-tone average–spondee threshold differences in functional hearing loss.
Method
Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20–90 dB SPL). Participants listened monaurally through earphones. Loudness predictions were obtained for the same stimuli by using a computational, dynamic loudness model.
Results
When evaluated at the same SPL, speech-shaped noise was judged louder than vowels/spondees, which were judged louder than tones. Equal-loudness levels were inferred from fitted loudness functions for the group. For the clinical application, the 2.1-dB difference between spondees and tones at equal loudness became a 12.1-dB difference when the stimuli were converted from SPL to HL.
Conclusions
Nearly all of the pure-tone average–spondee threshold differences in functional hearing loss are attributable to the calibration reference levels for 0 dB HL for tones and speech, which are based on detection and recognition, respectively. The recognition threshold for spondees is roughly 9 dB higher than the speech detection threshold; persons feigning a loss, who base the magnitude of the feigned loss on loudness, do not account for this difference. Furthermore, the dynamic loudness model was more accurate than the static model.
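
The SPL-to-HL step above rests on the fact that the 0 dB HL reference differs for tones and speech (HL = SPL minus the reference for that stimulus). The sketch below uses placeholder reference values, not the actual ANSI figures, chosen only so the arithmetic mirrors the abstract's 2.1 dB to 12.1 dB example.

```python
# Sketch of the SPL -> HL conversion. Reference values are placeholders, NOT the
# actual ANSI RETSPLs; they simply show how a small SPL difference at equal
# loudness can grow when expressed in HL.
ref_tone_spl = 7.0        # placeholder 0 dB HL reference for the tone
ref_speech_spl = 17.0     # placeholder 0 dB HL reference for speech

tone_spl = 62.1           # tone level at equal loudness (hypothetical)
spondee_spl = 60.0        # spondee level 2.1 dB lower, as in the abstract

tone_hl = tone_spl - ref_tone_spl          # 55.1 dB HL
spondee_hl = spondee_spl - ref_speech_spl  # 43.0 dB HL
print(f"SPL difference: {tone_spl - spondee_spl:.1f} dB")   # 2.1 dB
print(f"HL difference:  {tone_hl - spondee_hl:.1f} dB")     # 12.1 dB
```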

from #Audiology via ola Kala on Inoreader http://ift.tt/2ho41Mm
via IFTTT

A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia

Purpose
Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia.
Method
We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories.
Results
Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories.
Conclusions
The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
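
A hedged sketch of one standard way a temporal binding window is estimated from simultaneity judgments: fitting a Gaussian to the proportion of "simultaneous" responses across audiovisual onset asynchronies. The data points are invented, and the study's fitting procedure may differ.

```python
# Hypothetical sketch: fit a Gaussian window to simultaneity-judgment data.
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_simult = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.25, 0.10])

def gauss(soa, amp, center, width):
    return amp * np.exp(-0.5 * ((soa - center) / width) ** 2)

(amp, center, width), _ = curve_fit(gauss, soa_ms, p_simult, p0=[1.0, 0.0, 150.0])
print(f"Window centre: {center:.0f} ms, width (SD): {width:.0f} ms")
```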

from #Audiology via ola Kala on Inoreader http://ift.tt/2igpKpL
via IFTTT

Voice-Related Patient-Reported Outcome Measures: A Systematic Review of Instrument Development and Validation

Purpose
The purpose of this study was to perform a comprehensive systematic review of the literature on voice-related patient-reported outcome (PRO) measures in adults and to evaluate each instrument for the presence of important measurement properties.
Method
MEDLINE, the Cumulative Index of Nursing and Allied Health Literature, and the Health and Psychosocial Instrument databases were searched using relevant vocabulary terms and key terms related to PRO measures and voice. Inclusion and exclusion criteria were developed in consultation with an expert panel. Three independent investigators assessed study methodology using criteria developed a priori. Measurement properties were examined and entered into evidence tables.
Results
A total of 3,744 studies assessing voice-related constructs were identified. This list was narrowed to 32 PRO measures on the basis of predetermined inclusion and exclusion criteria. Questionnaire measurement properties varied widely. Important thematic deficiencies were apparent: (a) lack of patient involvement in the item development process, (b) lack of robust construct validity, and (c) lack of clear interpretability and scaling.
Conclusions
PRO measures are a principal means of evaluating treatment effectiveness in voice-related conditions. Despite their prominence, available PRO measures have disparate methodological rigor. Care must be taken to understand the psychometric and measurement properties and the applicability of PRO measures before advocating for their use in clinical or research applications.

from #Audiology via ola Kala on Inoreader http://ift.tt/2ifll7c
via IFTTT

Auditory Training With Multiple Talkers and Passage-Based Semantic Cohesion

Purpose
Current auditory training methods typically result in improvements to speech recognition abilities in quiet, but learner gains may not extend to other domains in speech (e.g., recognition in noise) or self-assessed benefit. This study examined the potential of training involving multiple talkers and training emphasizing discourse-level top-down processing to produce more generalized learning.
Method
Normal-hearing participants (N = 64) were randomly assigned to 1 of 4 auditory training protocols using noise-vocoded speech simulating the processing of an 8-channel cochlear implant: sentence-based single-talker training, training with 24 different talkers, passage-based transcription training, and a control (transcribing unvocoded sentence materials). In all cases, participants completed 2 pretests under cochlear implant simulation, 1 hr of training, and 5 posttests to assess perceptual learning and cross-context generalization.
Results
Performance above the control was seen in all 3 experimental groups for sentence recognition in quiet. In addition, the multitalker training method generalized to a context word-recognition task, and the passage training method caused gains in sentence recognition in noise.
Conclusion
The gains of the multitalker and passage training groups over the control suggest that, with relatively small modifications, improvements to the generalized outcomes of auditory training protocols may be possible.
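
For context on the stimulus manipulation, the sketch below implements generic 8-channel noise vocoding (band-pass analysis, envelope extraction, noise carriers), the kind of cochlear implant simulation described above. Filter orders, cutoffs, and band edges are illustrative choices, not the study's parameters.

```python
# Sketch of 8-channel noise vocoding: band-pass the signal, extract each band's
# envelope, and use it to modulate band-limited noise. Settings are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)                 # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    env_lp = butter(4, 50.0, btype="low", fs=fs, output="sos")   # envelope smoother
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_lp, np.abs(sosfiltfilt(band, x)))  # band envelope
        carrier = sosfiltfilt(band, rng.standard_normal(len(x))) # band-limited noise
        out += np.clip(env, 0.0, None) * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(test, fs)
```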

from #Audiology via ola Kala on Inoreader http://ift.tt/2hI318I
via IFTTT

Children's Use of Semantic Context in Perception of Foreign-Accented Speech

Purpose
The purpose of this study is to evaluate children's use of semantic context to facilitate foreign-accented word recognition in noise.
Method
Monolingual, American English–speaking 5- to 7-year-olds (n = 168) repeated either Mandarin- or American English–accented sentences in babble, half of which contained final words that were highly predictable from context. The same final words were presented in the low- and high-predictability sentences.
Results
Word recognition scores were better in the high- than low-predictability contexts. Scores improved with age and were higher for the native than the Mandarin accent. The oldest children saw the greatest benefit from context; however, context benefit was similar regardless of speaker accent.
Conclusion
Despite significant acoustic-phonetic deviations from native norms, young children capitalize on contextual cues when presented with foreign-accented speech. Implications for spoken word recognition in children with speech, language, and hearing differences are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2igqPxY
via IFTTT

Economic Impact of Hearing Loss and Reduction of Noise-Induced Hearing Loss in the United States

Purpose
Hearing loss (HL) is pervasive and debilitating, and noise-induced HL is preventable by reducing environmental noise. Lack of economic analyses of HL impacts means that prevention and treatment remain a low priority for public health and environmental investment.
Method
This article estimates the costs of HL on productivity by building on established estimates for HL prevalence and wage and employment differentials between those with and without HL.
Results
We estimate that HL affects more than 13% of the working population. Not all HL can be prevented or treated, but if the 20% of HL resulting from excessive noise exposure were prevented, the economic benefit would be substantial—we estimate a range of $58 billion to $152 billion annually, with a core estimate of $123 billion. We believe this is a conservative estimate, because consideration of additional costs of HL, including health care and special education, would likely further increase the benefits associated with HL prevention.
Conclusion
HL is costly and warrants additional emphasis in public and environmental health programs. This study represents an important first step in valuing HL prevention—in particular, prevention of noise-induced HL—where new policies and technologies appear promising.
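
A back-of-envelope sketch of how prevalence, a per-worker earnings differential, and the noise-attributable fraction combine into an aggregate figure. Every input below is a hypothetical placeholder rather than the study's data, so the output is only an order-of-magnitude illustration.

```python
# Back-of-envelope sketch of the kind of productivity calculation the abstract
# describes. Every number below is a hypothetical placeholder, not study data.
working_population = 150_000_000      # hypothetical number of workers
hl_prevalence = 0.13                  # "more than 13%" of the working population
noise_attributable = 0.20             # share of HL attributed to excessive noise
annual_earnings_gap = 20_000.0        # hypothetical per-worker wage/employment loss ($)

workers_with_hl = working_population * hl_prevalence
total_cost = workers_with_hl * annual_earnings_gap
preventable_benefit = total_cost * noise_attributable
print(f"Total productivity cost:         ${total_cost / 1e9:.0f} billion/year")
print(f"Preventable (noise-induced) HL:  ${preventable_benefit / 1e9:.0f} billion/year")
```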

from #Audiology via ola Kala on Inoreader http://ift.tt/2hVYzj7
via IFTTT

Perceptual Error Analysis of Human and Synthesized Voices

Publication date: Available online 12 January 2017
Source: Journal of Voice
Author(s): Marina Englert, Glaucya Madazio, Ingrid Gielow, Jorge Lucero, Mara Behlau
Objective/Hypothesis
To assess the quality of synthesized voices through listeners' ability to discriminate human from synthesized voices.
Study Design
Prospective study.
Methods
Eighteen human voices with different types and degrees of deviation (roughness, breathiness, and strain; mild, moderate, and severe) were selected by three voice specialists. Synthesized samples with the same deviations as the human voices were produced by the VoiceSim system. The manipulated parameters were vocal frequency perturbation (roughness), additive noise (breathiness), and increased tension and subglottal pressure with decreased vocal fold separation (strain). Two hundred sixty-nine listeners were divided into three groups: voice-specialist speech-language pathologists (V-SLPs), general clinician SLPs (G-SLPs), and naive listeners (NLs). The SLP listeners also indicated the type and degree of deviation.
Results
The listeners misclassified 39.3% of the voices, including both synthesized (42.3%) and human (36.4%) samples (P = 0.001). V-SLPs had the lowest error rate in judging voice nature (34.6%); G-SLPs and NLs identified almost half of the synthesized samples as human (46.9% and 45.6%, respectively). Male voices were more susceptible to misidentification, and the synthesized breathy samples generated greater perceptual confusion. Samples with severe deviation appeared more prone to error. The synthesized female deviations were classified correctly, whereas male breathiness and strain were identified as roughness.
Conclusion
VoiceSim produced stimuli very similar to the voices of patients with dysphonia. V-SLPs were better able to distinguish human from synthesized voices. VoiceSim better simulates vocal breathiness and female deviations; the male samples need adjustment.
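
As a rough illustration of the two source manipulations named above (frequency perturbation for roughness, additive noise for breathiness), the sketch below applies them to a simple synthetic tone. This is not the VoiceSim system, and all parameter values are invented.

```python
# Hypothetical sketch: crude "rough" and "breathy" sources built from a sine tone.
import numpy as np

fs = 16000
n = fs                                       # one second of samples
f0 = 200.0                                   # nominal fundamental frequency (Hz)
rng = np.random.default_rng(0)

# Roughness: per-sample frequency perturbation (a crude stand-in for cycle-to-cycle jitter)
jitter = 1.0 + 0.02 * rng.standard_normal(n)
phase = 2 * np.pi * np.cumsum(f0 * jitter) / fs
rough = np.sin(phase)

# Breathiness: mix the periodic source with additive noise
periodic = np.sin(2 * np.pi * f0 * np.arange(n) / fs)
breathy = 0.7 * periodic + 0.3 * rng.standard_normal(n)
```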



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2j7LyFf
via IFTTT

Making sound waves: selected papers from the 2016 annual conference of the National Hearing Conservation Association

.


from #Audiology via ola Kala on Inoreader http://ift.tt/2j7FPiJ
via IFTTT

The IJA system for systematic reviews: “the whys and hows”

.


from #Audiology via ola Kala on Inoreader http://ift.tt/2ikjXUm
via IFTTT

Book Review

.


from #Audiology via ola Kala on Inoreader http://ift.tt/2j7O7Y1
via IFTTT

An ecological approach to hearing-health promotion in workplaces

.


from #Audiology via ola Kala on Inoreader http://ift.tt/2j7Jf52
via IFTTT

An ecological approach to hearing-health promotion in workplaces.

Int J Audiol. 2017 Jan 12;:1-12

Authors: Reddy R, Welch D, Ameratunga S, Thorne P

Abstract
OBJECTIVE: To develop and assess use, acceptability and feasibility of an ecological hearing conservation programme for workplaces.
DESIGN: A school-based public health hearing preservation education programme (Dangerous Decibels®) was adapted for workplaces using the Multi-level Approach to Community Health (MATCH) Model. The programme was delivered in small manufacturing companies and evaluated using a questionnaire administered before the training and at one week and two months after training.
STUDY SAMPLE: Workers (n = 56) from five small manufacturing companies were recruited.
RESULTS: There was a significant improvement in knowledge, attitudes and behaviour of workers at the intrapersonal level; in behaviour motivation and safety culture at the interpersonal and organisational levels; and an overall improvement in hearing-health behaviour at two months post-intervention.
CONCLUSIONS: The developed programme offers a simple, interactive and theory-based intervention that is well accepted and effective in promoting positive hearing-health behaviour in workplaces.

PMID: 28079408 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2inljsA
via IFTTT