Wednesday, 4 October 2017

Generalization of Perceptual Learning of Degraded Speech Across Talkers

Purpose
We investigated whether perceptual learning of noise-vocoded (NV) speech is specific to a particular talker or accent.
Method
Four groups of listeners (n = 18 per group) were first trained by listening to 20 NV sentences that had been recorded by a talker with either the same native accent as the listeners or a different regional accent. They then heard 20 novel NV sentences from either the native- or nonnative-accented talker (test), in a 2 × 2 (Training Talker/Accent × Test Talker/Accent) design.
Results
Word-report scores at test for participants trained and tested with the same (native- or nonnative-accented) talker did not differ from those for participants trained with 1 talker/accent and tested on another.
Conclusions
Learning of NV speech generalized completely between talkers. Two additional experiments confirmed this result. Thus, when listeners are trained to understand NV speech, they are not learning talker- or accent-specific features but instead are learning how to use the information available in the degraded signal. The results suggest that people with cochlear implants, who experience spectrally degraded speech, may not be too disadvantaged if they learn to understand speech through their implant by listening primarily to just 1 other talker, such as a spouse.
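For readers unfamiliar with the manipulation, here is a minimal sketch of noise vocoding: the signal is split into a few frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise. This is an illustration only, not the authors' stimulus pipeline; the channel count, log-spaced band edges, and Hilbert-envelope extraction are assumptions.

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def noise_vocode(x, fs, n_channels=6, f_lo=100.0, f_hi=8000.0):
        """Replace the spectral detail of speech x with band-limited noise,
        keeping only each band's amplitude envelope (fs must exceed 2*f_hi)."""
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        noise = np.random.default_rng(0).standard_normal(len(x))
        out = np.zeros(len(x))
        for f1, f2 in zip(edges[:-1], edges[1:]):
            sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, x)
            env = np.abs(hilbert(band))                    # amplitude envelope
            out += sosfilt(sos, noise) * env               # envelope-modulated noise
        return out / (np.max(np.abs(out)) + 1e-12)         # peak-normalize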

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-H-16-0300/2657169/Generalization-of-Perceptual-Learning-of-Degraded
via IFTTT

Encoding Deficits Impede Word Learning and Memory in Adults With Developmental Language Disorders

Purpose
The aim of this study was to determine whether the word-learning challenges associated with developmental language disorder (DLD) result from encoding or retention deficits.
Method
In Study 1, 59 postsecondary students with DLD and 60 with normal development (ND) took the California Verbal Learning Test–Second Edition, Adult Version (Delis, Kramer, Kaplan, & Ober, 2000). In Study 2, 23 postsecondary students with DLD and 24 with ND attempted to learn 9 novel words in each of 3 training conditions: uncued test, cued test, and no test (passive study). Retention was measured 1 day and 1 week later.
Results
By the end of training, students with DLD had encoded fewer familiar words (Study 1) and fewer novel words (Study 2) than their ND peers as evinced by word recall. They also demonstrated poorer encoding as evinced by slower growth in recall from Trials 1 to 2 (Studies 1 and 2), less semantic clustering of recalled words, and poorer recognition (Study 1). The DLD and ND groups were similar in the relative amount of information they could recall after retention periods of 5 and 20 min (Study 1). After a 1-day retention period, the DLD group recalled less information that had been encoded via passive study, but they performed as well as their ND peers when recalling information that had been encoded via tests (Study 2). Compared to passive study, encoding via tests also resulted in more robust lexical engagement after a 1-week retention for DLD and ND groups.
Conclusions
Encoding, not retention, is the problematic stage of word learning for adults with DLD. Self-testing with feedback lessens the deficit.
Supplemental Materials
http://ift.tt/2y2PoJz

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-17-0031/2657170/Encoding-Deficits-Impede-Word-Learning-and-Memory
via IFTTT

The History of Stuttering by 7 Years of Age: Follow-Up of a Prospective Community Cohort

Purpose
For a community cohort of children confirmed to have stuttered by the age of 4 years, we report (a) the recovery rate from stuttering, (b) predictors of recovery, and (c) comorbidities at the age of 7 years.
Method
This study was nested in the Early Language in Victoria Study. Predictors of stuttering recovery included child, family, and environmental measures and first-degree relative history of stuttering. Comorbidities examined at 7 years included temperament, language, nonverbal cognition, and health-related quality of life.
Results
The recovery rate by the age of 7 years was 65%. Girls with stronger communication skills at the age of 2 years had higher odds of recovery (adjusted OR = 7.1, 95% CI [1.3, 37.9], p = .02), but similar effects were not evident for boys (adjusted OR = 0.5, 95% CI [0.3, 1.1], p = .10). At the age of 7 years, children who had recovered from stuttering were more likely to have stronger language skills than children whose stuttering persisted (p = .05). No evident differences were identified on other outcomes including nonverbal cognition, temperament, and parent-reported quality of life.
Conclusion
Overall, findings suggested that there may be associations between language ability and recovery from stuttering. Subsequent research is needed to explore the directionality of this relationship.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-S-16-0205/2657162/The-History-of-Stuttering-by-7-Years-of-Age
via IFTTT

Characteristics of clinical measurements between biomechanical responders and non-responders to a shoe designed for knee osteoarthritis

Publication date: January 2018
Source: Gait & Posture, Volume 59
Author(s): Yongwook Kim, Jim Richards, Roy H. Lidtke, Renato Trede
Purpose
The purpose of this study was to investigate the characteristics of biomechanical and clinical measurements in relation to the knee adduction moment when wearing a standard shoe and a shoe designed for individuals with knee osteoarthritis (Flex-OA).
Methods
Kinematic and kinetic data were collected from thirty-two healthy individuals (64 knees) using a ten-camera motion analysis system and four force plates. Subjects performed 5 walking trials under the two conditions, and the magnitude of individuals' biomechanical responses was explored in relation to the clinical assessment of the Foot Posture Index, hip rotation range, strength of hip rotators, and active ankle-foot motion, all of which have been described as possible compensation mechanisms in knee osteoarthritis.
Results
Significant reductions in the first peak of the knee adduction moment (KAM) during stance phase (9.3%) were recorded (p < 0.0001). However, despite this difference, 22 of 64 knees showed either no change or an increased KAM, indicating a non-response or negative response to the Flex-OA shoe. Significant differences were observed between the responder and non-responder subgroups in the hip rotation range ratio (p = 0.044) and the hip rotator strength ratio (p = 0.028).
Conclusion
Significant differences were seen in clinical assessments of hip rotation range and hip rotator strength between responders and non-responders, using a cut-off of 0.02 Nm/kg change in the KAM.
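As a toy illustration of the responder split described in the Results, the classification reduces to a threshold on the change in first-peak KAM; the 0.02 Nm/kg cut-off is from the abstract, while the sign convention and example values are assumptions.

    def classify_knee(kam_standard, kam_flex_oa, cutoff=0.02):
        """Label one knee by the change in first-peak KAM (Nm/kg) between the
        standard shoe and the Flex-OA shoe; a reduction of at least `cutoff`
        counts as a response (direction convention assumed here)."""
        delta = kam_standard - kam_flex_oa     # positive = KAM reduced by Flex-OA
        return "responder" if delta >= cutoff else "non-responder"

    print(classify_knee(0.45, 0.41))   # responder: KAM fell by 0.04 Nm/kg
    print(classify_knee(0.45, 0.46))   # non-responder: KAM increased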



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xR6udu
via IFTTT

Standing or swaying to the beat: Discrete auditory rhythms entrain stance and promote postural coordination stability

Publication date: January 2018
Source: Gait & Posture, Volume 59
Author(s): Alexandre Coste, Robin N. Salesse, Mathieu Gueugnon, Ludovic Marin, Benoît G. Bardy
Humans appear to derive social and behavioral advantages from entraining themselves to discrete auditory rhythms (e.g., when dancing or communicating). We investigated the benefits of such entrainment on posture during quiet standing (spontaneous entrainment) and during a whole-body swaying task (intentional synchronization). We first evaluated how body sway was entrained by different auditory metronome frequencies (0.25, 0.5, and 1.0 Hz). We then assessed the stabilizing role of auditory rhythms on postural control, characterized from a dynamical-systems perspective by informational anchoring of the head (local stabilization) and fewer transitions from in-phase to anti-phase ankle-hip coordination (global stabilization). Our results revealed in both situations an entrainment of postural movements by external rhythms. This entrainment tended to be more effective when the metronome frequency (0.25 Hz) was close to the dominant sway frequency. In particular, we found during intentional synchronization that head movements were less variable when paced by a slower beat (informational anchoring) and that phase transitions between the two stable patterns in postural dynamics were delayed. Our findings demonstrate that human bipedal posture can be actively or spontaneously modulated by an external discrete auditory rhythm, which might be exploited for learning and rehabilitation.
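One common dynamical-systems way to quantify such entrainment is the phase-locking value between the sway signal and the pacing signal; the sketch below is a generic illustration under that assumption, not the authors' exact analysis.

    import numpy as np
    from scipy.signal import hilbert

    def phase_locking(sway, beat):
        """Phase-locking value between two signals: 0 = no entrainment, 1 = perfect."""
        rel = np.angle(hilbert(sway)) - np.angle(hilbert(beat))
        return np.abs(np.mean(np.exp(1j * rel)))

    fs = 100.0
    t = np.arange(0.0, 60.0, 1.0 / fs)
    beat = np.sin(2 * np.pi * 0.25 * t)                # 0.25 Hz metronome
    rng = np.random.default_rng(1)
    sway = np.sin(2 * np.pi * 0.25 * t + 0.4) + 0.1 * rng.standard_normal(t.size)
    print(round(phase_locking(sway, beat), 2))         # near 1.0 -> strong entrainment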



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2fLwywa
via IFTTT

Perceptual Implications of Level- and Frequency-Specific Deviations from Hearing Aid Prescription in Children.

J Am Acad Audiol. 2017 Oct;28(9):861-875

Authors: McCreery RW, Brennan M, Walker EA, Spratford M

Abstract
BACKGROUND: The purpose of providing amplification for children with hearing loss is to make speech audible across a range of frequencies and intensities. Children with hearing aids (HAs) that closely approximate prescriptive targets have better audibility than peers with HA output below prescriptive targets. Poor aided audibility puts children with hearing loss at risk for delays in communication, social, and academic development.
PURPOSE: The goals of this study were to determine how well HAs match prescriptive targets across ranges of frequency and intensity of speech and to determine how level- and frequency-dependent deviations from prescriptive target affect speech recognition in quiet and in background noise.
STUDY SAMPLE: One hundred sixty-six children with permanent mild to severe hearing loss who were between 6 months and 8 years of age and who wore HAs participated in the study.
DATA COLLECTION AND ANALYSIS: Hearing aid verification and speech recognition data were collected as part of a longitudinal study of communication development in children with HAs. Hearing aid output at levels of soft and average speech and maximum power output were compared with each child's prescriptive targets. The deviations from prescriptive target were quantified based on the root-mean-square (RMS) error and absolute deviation from target for octave frequencies. Children were classified into groups based on the number of level-dependent deviations from prescriptive target. Frequency-specific deviations from prescriptive target and sensation levels (SLs) were used to estimate the proximity of fittings across the frequency range. Lexical Neighborhood Test (LNT) word recognition in quiet and Computer-Assisted Speech Perception Assessment (CASPA) phoneme recognition in noise were compared across level-dependent error groups and as a function of SL at 4 kHz.
RESULTS: Children who had deviations from prescriptive target at all three input levels had poorer LNT word recognition in quiet than children who had fittings that matched prescriptive target within 5 dB RMS at all three input levels. Children with lower 4 kHz SLs through their HAs had poorer LNT recognition in quiet and CASPA phoneme recognition in noise than children with higher aided SLs.
CONCLUSIONS: Children with HAs fitted to provide audibility for speech across a range of inputs and frequencies had better speech recognition outcomes than peers with HAs that were not optimally fitted to prescriptive targets.
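The RMS-error criterion used above to group fittings can be written out directly; the target and measured values below are hypothetical, and only the 5 dB RMS criterion comes from the study.

    import numpy as np

    octave_freqs = [250, 500, 1000, 2000, 4000, 8000]          # Hz
    target   = np.array([55.0, 60.0, 62.0, 58.0, 52.0, 45.0])  # hypothetical targets, dB SPL
    measured = np.array([54.0, 58.0, 60.0, 52.0, 44.0, 40.0])  # hypothetical aided output

    abs_dev = np.abs(measured - target)                  # per-frequency absolute deviation
    rms_error = np.sqrt(np.mean((measured - target) ** 2))
    print(f"RMS error = {rms_error:.1f} dB")             # >5 dB RMS would flag a fitting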

PMID: 28972473 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gaTJRd
via IFTTT

Identifying Otosclerosis with Aural Acoustical Tests of Absorbance, Group Delay, Acoustic Reflex Threshold, and Otoacoustic Emissions.

J Am Acad Audiol. 2017 Oct;28(9):838-860

Authors: Keefe DH, Archer KL, Schmid KK, Fitzpatrick DF, Feeney MP, Hunter LL

Abstract
BACKGROUND: Otosclerosis is a progressive middle-ear disease that affects conductive transmission through the middle ear. Ear-canal acoustic tests may be useful in the diagnosis of conductive disorders. This study addressed the degree to which results from a battery of ear-canal tests, which include wideband reflectance, acoustic stapedius muscle reflex threshold (ASRT), and transient evoked otoacoustic emissions (TEOAEs), were effective in quantifying a risk of otosclerosis and in evaluating middle-ear function in ears after surgical intervention for otosclerosis.
PURPOSE: To evaluate the ability of the test battery to classify ears as normal or otosclerotic, measure the accuracy of reflectance in classifying ears as normal or otosclerotic, and evaluate the similarity of responses in normal ears compared with ears after surgical intervention for otosclerosis.
RESEARCH DESIGN: A quasi-experimental cross-sectional study incorporating case control was used. Three groups were studied: one diagnosed with otosclerosis before corrective surgery, a group that received corrective surgery for otosclerosis, and a control group.
STUDY SAMPLE: The test groups included 23 ears (13 right and 10 left) with normal hearing from 16 participants (4 male and 12 female), 12 ears (7 right and 5 left) diagnosed with otosclerosis from 9 participants (3 male and 6 female), and 13 ears (4 right and 9 left) after surgical intervention from 10 participants (2 male and 8 female).
DATA COLLECTION AND ANALYSIS: Participants received audiometric evaluations and clinical immittance testing. Experimental tests performed included ASRT tests with wideband reference signal (0.25-8 kHz), reflectance tests (0.25-8 kHz), which were parameterized by absorbance and group delay at ambient pressure and at swept tympanometric pressures, and TEOAE tests using chirp stimuli (1-8 kHz). ASRTs were measured in ipsilateral and contralateral conditions using tonal and broadband noise activators. Experimental ASRT tests were based on the difference in wideband-absorbed sound power before and after presenting the activator. Diagnostic accuracy to classify ears as otosclerotic or normal was quantified by the area under the receiver operating characteristic curve (AUC) for univariate and multivariate reflectance tests. The multivariate predictor used a small number of input reflectance variables, each having a large AUC, in a principal components analysis to create independent variables, followed by a logistic regression procedure to classify the test ears.
RESULTS: Relative to the results in normal ears, diagnosed otosclerosis ears more frequently showed absent TEOAEs and ASRTs, reduced ambient absorbance at 4 kHz, and a different pattern of tympanometric absorbance and group delay (absorbance increased at 2.8 kHz at the positive-pressure tail and decreased at 0.7-1 kHz at the peak pressure, whereas group delay decreased at positive and negative-pressure tails from 0.35-0.7 kHz, and at 2.8-4 kHz at positive-pressure tail). Using a multivariate predictor with three reflectance variables, tympanometric reflectance (AUC = 0.95) was more accurate than ambient reflectance (AUC = 0.88) in classifying ears as normal or otosclerotic.
CONCLUSIONS: Reflectance provides a middle-ear test that is sensitive to classifying ears as otosclerotic or normal, which may be useful in clinical applications.
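A skeleton of the multivariate predictor described above (principal components analysis feeding a logistic regression, scored by AUC) might look as follows; the feature matrix and labels are random placeholders, not study data, and scikit-learn is assumed.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(35, 3))            # 3 reflectance variables per ear (placeholder)
    y = rng.integers(0, 2, size=35)         # 1 = otosclerotic, 0 = normal (placeholder)

    clf = make_pipeline(PCA(n_components=3), LogisticRegression())
    clf.fit(X, y)
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])   # in-sample AUC, illustration only
    print(f"AUC = {auc:.2f}")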

PMID: 28972472 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYmeLd
via IFTTT

Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss.

J Am Acad Audiol. 2017 Oct;28(9):823-837

Authors: Brennan MA, Lewis D, McCreery R, Kopun J, Alexander JM

Abstract
BACKGROUND: Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL).
PURPOSE: To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL.
RESEARCH DESIGN: Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification.
STUDY SAMPLE: Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL.
INTERVENTION: Participants listened to speech processed by a hearing aid simulator that amplified input signals to match targets from a prescriptive fitting procedure.
DATA COLLECTION AND ANALYSIS: Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT.
RESULTS: Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age.
CONCLUSION: Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.
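As context for what "frequency-lowered using NFC" means, a common formulation maps input frequencies above a start frequency compressively toward it; the sketch below uses that generic rule with illustrative parameters, not the settings of the study's hearing aid simulator.

    import numpy as np

    def nfc_map(f, f_start=2000.0, ratio=2.0):
        """Output frequency for input frequency f (Hz): transparent below f_start,
        compressed above it (log-frequency distance divided by the ratio)."""
        f = np.asarray(f, dtype=float)
        compressed = f_start * (f / f_start) ** (1.0 / ratio)
        return np.where(f <= f_start, f, compressed)

    print(nfc_map([1000.0, 4000.0, 8000.0]))   # 8000 Hz maps to 4000 Hz at ratio 2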

PMID: 28972471 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gaTGF1
via IFTTT

Listener Performance with a Novel Hearing Aid Frequency Lowering Technique.

J Am Acad Audiol. 2017 Oct;28(9):810-822

Authors: Kirby BJ, Kopun JG, Spratford M, Mollak CM, Brennan MA, McCreery RW

Abstract
BACKGROUND: Sloping hearing loss imposes limits on audibility for high-frequency sounds in many hearing aid users. Signal processing algorithms that shift high-frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving audibility of high-frequency sounds.
PURPOSE: This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency-lowering signal processing strategy called frequency composition (FC) in adults and children.
RESEARCH DESIGN: Participants wore the study hearing aids in two signal processing conditions (conventional processing versus FC) at an initial laboratory visit and subsequently at home during two approximately six-week long trials, with the order of conditions counterbalanced across individuals in a double-blind paradigm.
STUDY SAMPLE: Children (N = 12, 7 females, mean age in years = 12.0, SD = 3.0) and adults (N = 12, 6 females, mean age in years = 56.2, SD = 17.6) with bilateral sensorineural hearing loss who were full-time hearing aid users.
DATA COLLECTION AND ANALYSES: Individual performance with each type of processing was assessed using speech perception tasks, a measure of listening effort, and subjective sound quality surveys at an initial visit. At the conclusion of each subsequent at-home trial, participants were retested in the laboratory. Linear mixed effects analyses were completed for each outcome measure with signal processing condition, age group, visit (prehome versus posthome trial), and measures of aided audibility as predictors.
RESULTS: Overall, there were few significant differences in speech perception, listening effort, or subjective sound quality between FC and conventional processing, effects of listener age, or longitudinal changes in performance. Listeners preferred FC to conventional processing on one of six subjective sound quality metrics. Better speech perception performance was consistently related to higher aided audibility.
CONCLUSIONS: These results indicate that when high-frequency speech sounds are made audible with conventional processing, speech recognition ability and listening effort are similar between conventional processing and FC. Despite the lack of benefit to speech perception, some listeners still preferred FC, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.

PMID: 28972470 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYgvVx
via IFTTT

Relationship of Grammatical Context on Children's Recognition of s/z-Inflected Words.

J Am Acad Audiol. 2017 Oct;28(9):799-809

Authors: Spratford M, McLean HH, McCreery R

Abstract
BACKGROUND: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language.
PURPOSE: To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimuli context (presented in isolation versus embedded medially within a sentence that has low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH and 8-kHz low-pass filtered for CHH).
RESEARCH DESIGN: A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH.
STUDY SAMPLE: Thirty-five children, aged 5-12 yrs, were recruited to participate in the study; 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English.
DATA COLLECTION AND ANALYSIS: Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady state, speech-shaped noise. Real-ear probe microphone measures of HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH.
RESULTS: When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition compared with sentence-embedded. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition compared with the isolated condition. CHH whose HAs had greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences.
CONCLUSIONS: High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children's use of high-frequency audibility in a manner that approximates how they learn language.
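For concreteness, presenting stimuli at a +10 dB signal-to-noise ratio, as in the recognition task above, amounts to scaling the noise against the speech RMS; this generic mixing sketch is an assumption-level illustration, not the study's calibration chain.

    import numpy as np

    def mix_at_snr(speech, noise, snr_db=10.0):
        """Scale noise so RMS(speech)/RMS(noise) hits the target SNR, then sum."""
        rms = lambda v: np.sqrt(np.mean(np.square(v)))
        scaled = noise * rms(speech) / (rms(noise) * 10.0 ** (snr_db / 20.0))
        n = min(len(speech), len(noise))
        return speech[:n] + scaled[:n]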

PMID: 28972469 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gakGEy
via IFTTT

Effect of Stimulus Polarity on Physiological Spread of Excitation in Cochlear Implants.

J Am Acad Audiol. 2017 Oct;28(9):786-798

Authors: Spitzer ER, Hughes ML

Abstract
BACKGROUND: Contemporary cochlear implants (CIs) use cathodic-leading, symmetrical, biphasic current pulses, despite a growing body of evidence that suggests anodic-leading pulses may be more effective at stimulating the auditory system. However, since much of this research on humans has used pseudomonophasic pulses or biphasic pulses with unusually long interphase gaps, the effects of stimulus polarity are unclear for clinically relevant (i.e., symmetric biphasic) stimuli.
PURPOSE: The purpose of this study was to examine the effects of stimulus polarity on basic characteristics of physiological spread-of-excitation (SOE) measures obtained with the electrically evoked compound action potential (ECAP) in CI recipients using clinically relevant stimuli.
RESEARCH DESIGN: Using a within-subjects (repeated measures) design, we examined the differences in mean amplitude, peak electrode location, area under the curve, and spatial separation between SOE curves obtained with anodic- and cathodic-leading symmetrical, biphasic pulses.
STUDY SAMPLE: Fifteen CI recipients (ages 13-77) participated in this study. All were users of Cochlear Ltd. devices.
DATA COLLECTION AND ANALYSIS: SOE functions were obtained using the standard forward-masking artifact reduction method. Probe electrodes were 5-18, and they were stimulated at an 8 (of 10) loudness rating ("loud"). Outcome measures (mean amplitude, peak electrode location, curve area, and spatial separation) for each polarity were compared within subjects.
RESULTS: Anodic-leading current pulses produced ECAPs with larger average amplitudes, greater curve area, and less spatial separation between SOE patterns compared with that for cathodic-leading pulses. There was no effect of polarity on peak electrode location.
CONCLUSIONS: These results indicate that for equal current levels, the anodic-leading polarity produces broader excitation patterns compared with cathodic-leading pulses, which reduces the spatial separation between functions. This result is likely due to preferential stimulation of the central axon. Further research is needed to determine whether SOE patterns obtained with anodic-leading pulses better predict pitch discrimination.
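The "standard forward-masking artifact reduction method" mentioned above is conventionally a four-frame subtraction; the schematic below states the textbook version of that paradigm, which is assumed rather than quoted from this paper.

    import numpy as np

    def ecap_forward_masking(A, B, C, D):
        """A: probe alone       (neural response + probe artifact)
           B: masker then probe (probe artifact only; nerve refractory from masker)
           C: masker alone      (masker frame)
           D: no stimulus       (baseline/system artifact)
        Returns the artifact-reduced ECAP waveform: A - B + C - D."""
        return np.asarray(A) - np.asarray(B) + np.asarray(C) - np.asarray(D)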

PMID: 28972468 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYk1zk
via IFTTT

Effects of Device on Video Head Impulse Test (vHIT) Gain.

J Am Acad Audiol. 2017 Oct;28(9):778-785

Authors: Janky KL, Patterson JN, Shepard NT, Thomas MLA, Honaker JA

Abstract
BACKGROUND: Numerous video head impulse test (vHIT) devices are available commercially; however, gain is not calculated uniformly. An evaluation of these devices/algorithms in healthy controls and patients with vestibular loss is necessary for comparing and synthesizing work that utilizes different devices and gain calculations.
PURPOSE: Using three commercially available vHIT devices/algorithms, the purpose of the present study was to compare: (1) horizontal canal vHIT gain among devices/algorithms in normal control subjects; (2) the effects of age on vHIT gain for each device/algorithm in normal control subjects; and (3) the clinical performance of horizontal canal vHIT gain between devices/algorithms for differentiating normal versus abnormal vestibular function.
RESEARCH DESIGN: Prospective.
STUDY SAMPLE: Sixty-one normal control adult subjects (range 20-78) and eleven adults with unilateral or bilateral vestibular loss (range 32-79).
DATA COLLECTION AND ANALYSIS: vHIT was administered using three different devices/algorithms, randomized in order, for each subject on the same day: (1) Impulse (Otometrics, Schaumburg, IL; monocular eye recording, right eye only; using area under the curve gain), (2) EyeSeeCam (Interacoustics, Denmark; monocular eye recording, left eye only; using instantaneous gain), and (3) VisualEyes (MicroMedical, Chatham, IL; binocular eye recording; using position gain).
RESULTS: There was a significant mean difference in vHIT gain among devices/algorithms for both the normal control and vestibular loss groups. vHIT gain was significantly larger in the ipsilateral direction of the eye used to measure gain; however, in spite of the significant mean differences in vHIT gain among devices/algorithms and the significant directional bias, classification of "normal" versus "abnormal" gain is consistent across all compared devices/algorithms, with the exception of instantaneous gain at 40 msec. There was not an effect of age on vHIT gain up to 78 years regardless of the device/algorithm.
CONCLUSIONS: These findings support that vHIT gain is significantly different between devices/algorithms, suggesting that care should be taken when making direct comparisons of absolute gain values between devices/algorithms.
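Two of the gain definitions contrasted above can be written out to show why they need not agree; the sketch below states generic area-under-the-curve and instantaneous gains, with details such as windowing, filtering, and desaccading deliberately omitted and assumed.

    import numpy as np

    def position_gain(eye_vel, head_vel, dt):
        """Area-style gain: integrated eye velocity over integrated head velocity."""
        return np.trapz(np.abs(eye_vel), dx=dt) / np.trapz(np.abs(head_vel), dx=dt)

    def instantaneous_gain(eye_vel, head_vel, t, at_ms=40.0):
        """Eye/head velocity ratio sampled at a fixed latency (e.g., 40 ms)."""
        i = int(np.argmin(np.abs(np.asarray(t) - at_ms / 1000.0)))
        return eye_vel[i] / head_vel[i]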

PMID: 28972467 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gaTCFh
via IFTTT

Boys Town National Research Hospital: Past, Present, and Future.

J Am Acad Audiol. 2017 Oct;28(9):776-777

Authors: Janky K, McCreery R, Jesteadt W

PMID: 28972466 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYeRmS
via IFTTT

Perceptual Implications of Level- and Frequency-Specific Deviations from Hearing Aid Prescription in Children.

Perceptual Implications of Level- and Frequency-Specific Deviations from Hearing Aid Prescription in Children.

J Am Acad Audiol. 2017 Oct;28(9):861-875

Authors: McCreery RW, Brennan M, Walker EA, Spratford M

Abstract
BACKGROUND: The purpose of providing amplification for children with hearing loss is to make speech audible across a range of frequencies and intensities. Children with hearing aids (HAs) that closely approximate prescriptive targets have better audibility than peers with HA output below prescriptive targets. Poor aided audibility puts children with hearing loss at risk for delays in communication, social, and academic development.
PURPOSE: The goals of this study were to determine how well HAs match prescriptive targets across ranges of frequency and intensity of speech and to determine how level- and frequency-dependent deviations from prescriptive target affect speech recognition in quiet and in background noise.
STUDY SAMPLE: One-hundred sixty-six children with permanent mild to severe hearing loss who were between 6 months and 8 years of age and who wore HAs participated in the study.
DATA COLLECTION AND ANALYSIS: Hearing aid verification and speech recognition data were collected as part of a longitudinal study of communication development in children with HAs. Hearing aid output at levels of soft and average speech and maximum power output were compared with each child's prescriptive targets. The deviations from prescriptive target were quantified based on the root-mean-square (RMS) error and absolute deviation from target for octave frequencies. Children were classified into groups based on the number of level-dependent deviations from prescriptive target. Frequency-specific deviations from prescriptive target and sensation levels (SLs) were used to estimate the proximity of fittings across the frequency range. Lexical Neighborhood Test (LNT) word recognition in quiet and Computer-Assisted Speech Perception Assessment (CASPA) phoneme recognition in noise were compared across level-dependent error groups and as a function of SL at 4 kHz.
RESULTS: Children who had deviations from prescriptive target at all three input levels had poorer LNT word recognition in quiet than children who had fittings that matched prescriptive target within 5 dB RMS at all three input levels. Children with lower 4 kHz SLs through their HAs had poorer LNT recognition in quiet and CASPA phoneme recognition in noise than children with higher aided SLs.
CONCLUSIONS: Children with HAs fitted to provide audibility for speech across a range of inputs and frequencies had better speech recognition outcomes than peers with HAs that were not optimally fitted to prescriptive targets.

PMID: 28972473 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gaTJRd
via IFTTT

Identifying Otosclerosis with Aural Acoustical Tests of Absorbance, Group Delay, Acoustic Reflex Threshold, and Otoacoustic Emissions.

Identifying Otosclerosis with Aural Acoustical Tests of Absorbance, Group Delay, Acoustic Reflex Threshold, and Otoacoustic Emissions.

J Am Acad Audiol. 2017 Oct;28(9):838-860

Authors: Keefe DH, Archer KL, Schmid KK, Fitzpatrick DF, Feeney MP, Hunter LL

Abstract
BACKGROUND: Otosclerosis is a progressive middle-ear disease that affects conductive transmission through the middle ear. Ear-canal acoustic tests may be useful in the diagnosis of conductive disorders. This study addressed the degree to which results from a battery of ear-canal tests, which include wideband reflectance, acoustic stapedius muscle reflex threshold (ASRT), and transient evoked otoacoustic emissions (TEOAEs), were effective in quantifying a risk of otosclerosis and in evaluating middle-ear function in ears after surgical intervention for otosclerosis.
PURPOSE: To evaluate the ability of the test battery to classify ears as normal or otosclerotic, measure the accuracy of reflectance in classifying ears as normal or otosclerotic, and evaluate the similarity of responses in normal ears compared with ears after surgical intervention for otosclerosis.
RESEARCH DESIGN: A quasi-experimental cross-sectional study incorporating case control was used. Three groups were studied: one diagnosed with otosclerosis before corrective surgery, a group that received corrective surgery for otosclerosis, and a control group.
STUDY SAMPLE: The test groups included 23 ears (13 right and 10 left) with normal hearing from 16 participants (4 male and 12 female), 12 ears (7 right and 5 left) diagnosed with otosclerosis from 9 participants (3 male and 6 female), and 13 ears (4 right and 9 left) after surgical intervention from 10 participants (2 male and 8 female).
DATA COLLECTION AND ANALYSIS: Participants received audiometric evaluations and clinical immittance testing. Experimental tests performed included ASRT tests with wideband reference signal (0.25-8 kHz), reflectance tests (0.25-8 kHz), which were parameterized by absorbance and group delay at ambient pressure and at swept tympanometric pressures, and TEOAE tests using chirp stimuli (1-8 kHz). ASRTs were measured in ipsilateral and contralateral conditions using tonal and broadband noise activators. Experimental ASRT tests were based on the difference in wideband-absorbed sound power before and after presenting the activator. Diagnostic accuracy to classify ears as otosclerotic or normal was quantified by the area under the receiver operating characteristic curve (AUC) for univariate and multivariate reflectance tests. The multivariate predictor used a small number of input reflectance variables, each having a large AUC, in a principal components analysis to create independent variables and followed by a logistic regression procedure to classify the test ears.
RESULTS: Relative to the results in normal ears, diagnosed otosclerosis ears more frequently showed absent TEOAEs and ASRTs, reduced ambient absorbance at 4 kHz, and a different pattern of tympanometric absorbance and group delay (absorbance increased at 2.8 kHz at the positive-pressure tail and decreased at 0.7-1 kHz at the peak pressure, whereas group delay decreased at positive and negative-pressure tails from 0.35-0.7 kHz, and at 2.8-4 kHz at positive-pressure tail). Using a multivariate predictor with three reflectance variables, tympanometric reflectance (AUC = 0.95) was more accurate than ambient reflectance (AUC = 0.88) in classifying ears as normal or otosclerotic.
CONCLUSIONS: Reflectance provides a middle-ear test that is sensitive to classifying ears as otosclerotic or normal, which may be useful in clinical applications.

PMID: 28972472 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYmeLd
via IFTTT

Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss.

Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss.

J Am Acad Audiol. 2017 Oct;28(9):823-837

Authors: Brennan MA, Lewis D, McCreery R, Kopun J, Alexander JM

Abstract
BACKGROUND: Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL).
PURPOSE: To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL.
RESEARCH DESIGN: Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification.
STUDY SAMPLE: Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL.
INTERVENTION: Participants listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target fitting procedure.
DATA COLLECTION AND ANALYSIS: Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT.
RESULTS: Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age.
CONCLUSION: Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.
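As a rough illustration of the frequency-lowering idea behind NFC: input frequencies below a start (cutoff) frequency pass through unchanged, while those above it are compressed toward the start frequency by a fixed ratio, trading spectral fidelity for audibility. The start frequency, compression ratio, and log-domain mapping below are illustrative assumptions; commercial implementations differ in detail.

def nfc_map(f_in_hz, start_hz=2000.0, ratio=2.0):
    """Map an input frequency (Hz) to its lowered output frequency."""
    if f_in_hz <= start_hz:
        return f_in_hz                      # below the cutoff: unchanged
    # Compress log-frequency distance above the start frequency.
    return start_hz * (f_in_hz / start_hz) ** (1.0 / ratio)

for f in (1000, 3000, 6000, 9000):
    print(f"{f} Hz -> {nfc_map(f):.0f} Hz")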

PMID: 28972471 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gaTGF1
via IFTTT

Listener Performance with a Novel Hearing Aid Frequency Lowering Technique.


J Am Acad Audiol. 2017 Oct;28(9):810-822

Authors: Kirby BJ, Kopun JG, Spratford M, Mollak CM, Brennan MA, McCreery RW

Abstract
BACKGROUND: Sloping hearing loss imposes limits on audibility for high-frequency sounds in many hearing aid users. Signal processing algorithms that shift high-frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving audibility of high-frequency sounds.
PURPOSE: This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency-lowering signal processing strategy called frequency composition (FC) in adults and children.
RESEARCH DESIGN: Participants wore the study hearing aids in two signal-processing conditions (conventional processing versus FC), first at an initial laboratory visit and then at home during two approximately six-week trials, with the order of conditions counterbalanced across individuals in a double-blind paradigm.
STUDY SAMPLE: Children (N = 12, 7 females, mean age in years = 12.0, SD = 3.0) and adults (N = 12, 6 females, mean age in years = 56.2, SD = 17.6) with bilateral sensorineural hearing loss who were full-time hearing aid users.
DATA COLLECTION AND ANALYSES: Individual performance with each type of processing was assessed using speech perception tasks, a measure of listening effort, and subjective sound quality surveys at an initial visit. At the conclusion of each subsequent at-home trial, participants were retested in the laboratory. Linear mixed effects analyses were completed for each outcome measure with signal processing condition, age group, visit (prehome versus posthome trial), and measures of aided audibility as predictors.
RESULTS: Overall, there were few significant differences in speech perception, listening effort, or subjective sound quality between FC and conventional processing, effects of listener age, or longitudinal changes in performance. Listeners preferred FC to conventional processing on one of six subjective sound quality metrics. Better speech perception performance was consistently related to higher aided audibility.
CONCLUSIONS: These results indicate that when high-frequency speech sounds are made audible with conventional processing, speech recognition ability and listening effort are similar between conventional processing and FC. Despite the lack of benefit to speech perception, some listeners still preferred FC, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.
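A hedged sketch of the kind of linear mixed-effects analysis described above, with processing condition, age group, visit, and aided audibility as fixed effects and a random intercept per participant. The column names and data file are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fc_outcomes.csv")  # hypothetical long-format outcome data
model = smf.mixedlm(
    "speech_score ~ condition + age_group + visit + audibility",
    data=df,
    groups=df["participant"],        # random intercept for each participant
)
print(model.fit().summary())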

PMID: 28972470 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYgvVx
via IFTTT

Relationship of Grammatical Context on Children's Recognition of s/z-Inflected Words.


J Am Acad Audiol. 2017 Oct;28(9):799-809

Authors: Spratford M, McLean HH, McCreery R

Abstract
BACKGROUND: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language.
PURPOSE: To determine whether recognition of s/z-inflected monosyllabic words differs for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimulus context (words presented in isolation versus embedded medially within a sentence with low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtering for CNH; 8-kHz low-pass filtering for CHH).
RESEARCH DESIGN: A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH.
STUDY SAMPLE: Thirty-five children aged 5-12 years participated in the study: 24 CNH and 11 CHH (bilateral mild-to-severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English.
DATA COLLECTION AND ANALYSIS: Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady state, speech-shaped noise. Real-ear probe microphone measures of HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH.
RESULTS: When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition than in the sentence-embedded condition. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition than in the isolated condition. CHH whose HAs provided greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences.
CONCLUSIONS: High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children's use of high-frequency audibility in a manner that approximates how they learn language.
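A minimal sketch of the within-subjects analysis described above for CNH: a repeated-measures ANOVA with bandwidth and context as within-subjects factors, here via statsmodels. Column names and the data file are hypothetical.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("wordmorpheme_scores.csv")  # one row per child per condition
aov = AnovaRM(
    data=df,
    depvar="recognition",            # word+morpheme recognition score
    subject="child_id",
    within=["bandwidth", "context"], # 8 vs. 4 kHz; isolated vs. embedded
).fit()
print(aov)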

PMID: 28972469 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gakGEy
via IFTTT

Effect of Stimulus Polarity on Physiological Spread of Excitation in Cochlear Implants.


J Am Acad Audiol. 2017 Oct;28(9):786-798

Authors: Spitzer ER, Hughes ML

Abstract
BACKGROUND: Contemporary cochlear implants (CIs) use cathodic-leading, symmetrical, biphasic current pulses, despite a growing body of evidence that suggests anodic-leading pulses may be more effective at stimulating the auditory system. However, since much of this research on humans has used pseudomonophasic pulses or biphasic pulses with unusually long interphase gaps, the effects of stimulus polarity are unclear for clinically relevant (i.e., symmetric biphasic) stimuli.
PURPOSE: The purpose of this study was to examine the effects of stimulus polarity on basic characteristics of physiological spread-of-excitation (SOE) measures obtained with the electrically evoked compound action potential (ECAP) in CI recipients using clinically relevant stimuli.
RESEARCH DESIGN: Using a within-subjects (repeated measures) design, we examined the differences in mean amplitude, peak electrode location, area under the curve, and spatial separation between SOE curves obtained with anodic- and cathodic-leading symmetrical, biphasic pulses.
STUDY SAMPLE: Fifteen CI recipients (ages 13-77) participated in this study. All were users of Cochlear Ltd. devices.
DATA COLLECTION AND ANALYSIS: SOE functions were obtained using the standard forward-masking artifact reduction method. Probe electrodes 5-18 were stimulated at a loudness rating of 8 on a 10-point scale ("loud"). Outcome measures (mean amplitude, peak electrode location, curve area, and spatial separation) for each polarity were compared within subjects.
RESULTS: Anodic-leading current pulses produced ECAPs with larger average amplitudes, greater curve area, and less spatial separation between SOE patterns compared with that for cathodic-leading pulses. There was no effect of polarity on peak electrode location.
CONCLUSIONS: These results indicate that for equal current levels, the anodic-leading polarity produces broader excitation patterns compared with cathodic-leading pulses, which reduces the spatial separation between functions. This result is likely due to preferential stimulation of the central axon. Further research is needed to determine whether SOE patterns obtained with anodic-leading pulses better predict pitch discrimination.
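For concreteness, the first three SOE outcome measures compared above can be computed from a single SOE function (ECAP amplitude as a function of masker electrode) as a mean, an argmax, and a trapezoidal area. The amplitudes below are made-up values for one probe electrode and one polarity.

import numpy as np

electrodes = np.arange(5, 19)                  # masker electrodes 5-18
amplitudes = np.array([12, 18, 25, 40, 66, 90, 110,
                       95, 70, 48, 30, 20, 14, 10], dtype=float)  # microvolts

mean_amplitude = amplitudes.mean()
peak_electrode = electrodes[np.argmax(amplitudes)]
curve_area = np.trapz(amplitudes, electrodes)  # area under the SOE curve

print(mean_amplitude, peak_electrode, curve_area)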

PMID: 28972468 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYk1zk
via IFTTT

Effects of Device on Video Head Impulse Test (vHIT) Gain.


J Am Acad Audiol. 2017 Oct;28(9):778-785

Authors: Janky KL, Patterson JN, Shepard NT, Thomas MLA, Honaker JA

Abstract
BACKGROUND: Numerous video head impulse test (vHIT) devices are available commercially; however, gain is not calculated uniformly. An evaluation of these devices/algorithms in healthy controls and patients with vestibular loss is necessary for comparing and synthesizing work that utilizes different devices and gain calculations.
PURPOSE: Using three commercially available vHIT devices/algorithms, the purpose of the present study was to compare: (1) horizontal canal vHIT gain among devices/algorithms in normal control subjects; (2) the effects of age on vHIT gain for each device/algorithm in normal control subjects; and (3) the clinical performance of horizontal canal vHIT gain between devices/algorithms for differentiating normal versus abnormal vestibular function.
RESEARCH DESIGN: Prospective.
STUDY SAMPLE: Sixty-one normal control adults (ages 20-78 years) and 11 adults with unilateral or bilateral vestibular loss (ages 32-79 years).
DATA COLLECTION AND ANALYSIS: vHIT was administered to each subject on the same day using three devices/algorithms in randomized order: (1) Impulse (Otometrics, Schaumburg, IL; monocular eye recording, right eye only; area-under-the-curve gain), (2) EyeSeeCam (Interacoustics, Denmark; monocular eye recording, left eye only; instantaneous gain), and (3) VisualEyes (MicroMedical, Chatham, IL; binocular eye recording; position gain).
RESULTS: There was a significant mean difference in vHIT gain among devices/algorithms for both the normal control and vestibular loss groups. vHIT gain was significantly larger in the direction ipsilateral to the eye used to measure gain; however, despite the significant mean differences among devices/algorithms and the significant directional bias, classification of "normal" versus "abnormal" gain was consistent across all compared devices/algorithms, with the exception of instantaneous gain at 40 msec. There was no effect of age on vHIT gain up to 78 years, regardless of device/algorithm.
CONCLUSIONS: These findings indicate that vHIT gain differs significantly between devices/algorithms, suggesting that care should be taken when directly comparing absolute gain values across devices/algorithms.
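A toy illustration of why the gain algorithms named above can disagree: area-under-the-curve gain integrates the whole eye and head velocity traces, whereas instantaneous gain takes their ratio at a fixed latency, so a small eye-movement delay depresses early instantaneous gain even when overall gain is high. The synthetic traces and the 40-ms sample point are assumptions for illustration only.

import numpy as np

t = np.linspace(0.0, 0.15, 151)                          # 0-150 ms, 1-ms steps
head = 250 * np.exp(-(((t - 0.075) / 0.03) ** 2))        # head velocity, deg/s
eye = 0.9 * 250 * np.exp(-(((t - 0.080) / 0.03) ** 2))   # scaled, delayed eye trace

auc_gain = np.trapz(eye, t) / np.trapz(head, t)          # area-under-the-curve gain
i40 = np.searchsorted(t, 0.040)
inst_gain_40 = eye[i40] / head[i40]                      # instantaneous gain at 40 ms
# Position gain is the ratio of eye to head displacement; displacement is
# integrated velocity, so over the full window it coincides with auc_gain here.

print(f"AUC gain = {auc_gain:.2f}, 40-ms instantaneous gain = {inst_gain_40:.2f}")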

PMID: 28972467 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2gaTCFh
via IFTTT

Boys Town National Research Hospital: Past, Present, and Future.


J Am Acad Audiol. 2017 Oct;28(9):776-777

Authors: Janky K, McCreery R, Jesteadt W

PMID: 28972466 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2xYeRmS
via IFTTT

Contemporary Commercial Music Singing Students—Voice Quality and Vocal Function at the Beginning of Singing Training


Publication date: Available online 3 October 2017
Source: Journal of Voice
Author(s): Ewelina M. Sielska-Badurek, Maria Sobol, Katarzyna Olszowska, Kazimierz Niemczyk
Objective: The purpose of this study was to assess voice quality and vocal tract function in popular singing students at the beginning of their singing training at the High School of Music.
Design: This is a retrospective cross-sectional study.
Methods: The study included 45 popular singing students (35 females and 10 males; mean age 19.9 ± 2.8 years). They were assessed in the first 2 months of their 4-year singing training at the High School of Music, between 2013 and 2016. Voice quality and vocal tract function were evaluated using videolaryngostroboscopy, palpation of the vocal tract structures, perceptual assessment of the speaking and singing voice, acoustic analysis, maximum phonation time, the Voice Handicap Index (VHI), and the Singing Voice Handicap Index (SVHI).
Results: Twenty-two percent of the Contemporary Commercial Music singing students began their education at the High School with vocal nodules. Palpation showed correct motion and tension of the vocal tract structures in 50% of students during speaking and in 39.3% during singing. Perceptual assessment showed proper speaking voice quality in 80% of students and proper singing voice quality in 82.4%. Mean speaking fundamental frequency was 214 Hz in females and 116 Hz in males. The Dysphonia Severity Index was 2, and maximum phonation time was 17.7 seconds. The VHI and SVHI remained within the normal range (7.5 and 19, respectively). Perceptual singing voice assessment correlated with the SVHI (P = 0.006).
Conclusions: Twenty-two percent of the Contemporary Commercial Music singing students began their education at the High School with organic vocal fold lesions.
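The Dysphonia Severity Index reported above is conventionally computed from four voice measures (Wuyts et al., 2000): maximum phonation time (MPT, in seconds), highest attainable fundamental frequency (F0-High, Hz), softest attainable intensity (I-Low, dB), and jitter (%). A small sketch with illustrative input values, not data from this study:

def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """DSI per Wuyts et al. (2000); roughly +5 = normal, -5 = severely dysphonic."""
    return (0.13 * mpt_s + 0.0053 * f0_high_hz
            - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)

print(f"DSI = {dysphonia_severity_index(17.7, 880.0, 55.0, 1.0):.1f}")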



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2hKHCxz
via IFTTT

Vocal Evaluation of Children with Congenital Hypothyroidism


Publication date: Available online 3 October 2017
Source: Journal of Voice
Author(s): Ana Paula Dassie-Leite, Mara Behlau, Suzana Nesi-França, Monica Nunes Lima, Luiz de Lacerda
Objective: To evaluate the vocal characteristics of a group of children with congenital hypothyroidism (CH) and the association of these characteristics with the children's clinical, laboratory, and therapeutic profiles.
Material and Methods: Observational, analytical, cross-sectional study including 200 prepubertal children, of whom 100 had CH (study group [SG]) and 100 did not (control group [CG]). The following parameters were evaluated: (1) history (identification, complaints, and interfering variables); (2) auditory-perceptual and acoustic evaluation (samples analyzed perceptually by a group of specialists and objectively by a computer program); (3) self-assessment scores on the Pediatric Voice-Related Quality-of-Life (PVRQoL) survey; (4) laryngological evaluation (presence or absence of laryngeal lesions and data regarding glottal closure); and (5) medical records (CH etiology, age at treatment initiation, disease severity at diagnosis, treatment quality, and thyroid function tests on the day of the examination).
Results: In the perceptual assessment, 62.6% of the SG children passed and 37.4% failed the voice screening, results comparable with those in the CG (P = 0.45). Both groups had mean/median acoustic measurements within normal limits. Mean PVRQoL scores in the SG (99.3 ± 2.4) and CG (99.5 ± 1.7) were comparable (P = 1.00). Both the SG (16.7%) and the CG (15%) presented vocal cord lesions (P = 1.00). There was no association between voice/larynx characteristics and endocrinological data.
Conclusion: Prepubescent children diagnosed with CH during neonatal screening and with a lifelong history of adequate CH treatment showed vocal and laryngeal characteristics similar to those of children without CH.
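As a hedged sketch of one step of the acoustic evaluation mentioned above (estimating mean speaking fundamental frequency with a computer program), using the Parselmouth interface to Praat; the file name is a placeholder, not a file from this study.

import numpy as np
import parselmouth

snd = parselmouth.Sound("child_speech.wav")    # hypothetical recording
pitch = snd.to_pitch()                         # Praat autocorrelation pitch track
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                # discard unvoiced frames (0 Hz)
print(f"Mean speaking F0: {np.mean(f0):.0f} Hz")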



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xUy47q
via IFTTT