Thursday, 31 December 2015

Simultaneous Assessment of Speech Identification and Spatial Discrimination: A Potential Testing Approach for Bilateral Cochlear Implant Users?

With increasing numbers of children and adults receiving bilateral cochlear implants, there is an urgent need for assessment tools that enable testing of binaural hearing abilities. Current test batteries are either limited in scope or of an impractical duration for routine testing. Here, we report a behavioral test that enables combined testing of speech identification and spatial discrimination in noise. In this task, multitalker babble was presented from all speakers, and pairs of speech tokens were sequentially presented from two adjacent speakers. Listeners were required to identify both words from a closed set of four possibilities and to determine whether the second token was presented to the left or right of the first. In Experiment 1, normal-hearing adult listeners were tested at 15° intervals throughout the frontal hemifield. Listeners showed the highest spatial discrimination performance in and around the frontal midline, with a decline at more eccentric locations. In contrast, speech identification was least accurate near the midline and improved at more lateral locations. In Experiment 2, normal-hearing listeners were assessed using a restricted range of speaker locations designed to match those found in clinical testing environments. Here, speakers were separated by 15° around the midline and 30° at more lateral locations, which produced a pattern of behavioral results similar to that in Experiment 1. We conclude that this test offers the potential to assess both spatial discrimination and the ability to use spatial information for unmasking in clinical populations.
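As a purely illustrative aid (not the authors' materials or code), the short Python sketch below shows one way responses from such a dual task could be logged and scored per speaker-pair location, for both the word-identification and the left/right judgments; the field names, words, and data are hypothetical.

    # Hypothetical scoring sketch for a combined word-identification /
    # left-right spatial-discrimination task (not the authors' code).
    from collections import defaultdict

    trials = [
        # midpoint azimuth of the speaker pair (deg), presented words, reported
        # words, true direction of the second token, reported direction
        {"azimuth": 0,  "words": ("boat", "key"),  "resp_words": ("boat", "key"),  "dir": "L", "resp_dir": "L"},
        {"azimuth": 60, "words": ("fish", "tree"), "resp_words": ("fish", "door"), "dir": "R", "resp_dir": "L"},
    ]

    word_hits = defaultdict(lambda: [0, 0])   # azimuth -> [words correct, words presented]
    dir_hits = defaultdict(lambda: [0, 0])    # azimuth -> [judgments correct, trials]

    for t in trials:
        word_hits[t["azimuth"]][0] += sum(a == b for a, b in zip(t["words"], t["resp_words"]))
        word_hits[t["azimuth"]][1] += len(t["words"])
        dir_hits[t["azimuth"]][0] += int(t["dir"] == t["resp_dir"])
        dir_hits[t["azimuth"]][1] += 1

    for az in sorted(word_hits):
        (wc, wn), (dc, dn) = word_hits[az], dir_hits[az]
        print(f"{az:>4} deg: word ID {100 * wc / wn:.0f}%, spatial {100 * dc / dn:.0f}%")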



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UjepkZ
via IFTTT

Advancing Binaural Cochlear Implant Technology

This special issue contains a collection of 13 papers highlighting the collaborative research and engineering project entitled Advancing Binaural Cochlear Implant Technology—ABCIT—as well as research spin-offs from the project. In this introductory editorial, a brief history of the project is provided, alongside an overview of the studies.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YSyYvg
via IFTTT

Reducing Current Spread by Use of a Novel Pulse Shape for Electrical Stimulation of the Auditory Nerve

Improving the electrode-neuron interface to reduce current spread between individual electrodes has been identified as one of the main objectives in the search for future improvements in cochlear-implant performance. Here, we address this problem by presenting a novel stimulation strategy that takes account of the biophysical properties of the auditory neurons (spiral ganglion neurons, SGNs) stimulated in electrical hearing. This new strategy employs a ramped pulse shape, in which the maximum amplitude is reached through a linear rise in the injected current. We present the theoretical framework that supports this new strategy and suggests it will improve the modulation of SGN activity by exploiting the neurons' sensitivity to the rising slope of current pulses. The theoretical consequence of this sensitivity to the slope is a reduction in the spread of excitation within the cochlea and, consequently, an increase in the neural dynamic range. To explore the impact of the novel stimulation method on neural activity, we performed in vitro recordings of SGNs in culture. We show that the efficacy with which stimuli evoke action potentials in SGNs falls as the stimulus slope decreases. This work lays the foundation for a novel, more biomimetic stimulation strategy with considerable potential for implementation in cochlear-implant technology.
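As a purely illustrative sketch of the pulse-shape idea (not the stimulation parameters or code used in the study), the Python/NumPy snippet below builds a rectangular phase and a linearly ramped phase carrying equal charge; with equal charge, the ramped phase necessarily reaches roughly twice the peak current. All values are arbitrary examples.

    # Illustrative comparison of a rectangular phase and a linearly ramped phase
    # carrying the same charge (all values are arbitrary examples).
    import numpy as np

    fs = 1_000_000                        # 1 MHz time grid for the pulse shape
    phase_us = 50                         # phase duration in microseconds (illustrative)
    n = int(fs * phase_us * 1e-6)

    i_rect = np.full(n, 1.0)              # conventional rectangular phase (arbitrary current units)
    charge = i_rect.sum() / fs            # charge carried by the rectangular phase

    i_ramp = np.linspace(0.0, 1.0, n)     # current rises linearly to its maximum
    i_ramp *= charge / (i_ramp.sum() / fs)   # rescale so both phases carry equal charge

    print(f"peak current: rect = {i_rect.max():.2f}, ramp = {i_ramp.max():.2f}")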



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YSz06x
via IFTTT

Sparse Nonnegative Matrix Factorization Strategy for Cochlear Implants

Current cochlear implant (CI) strategies carry speech information via the waveform envelope in frequency subbands. CIs require efficient speech processing to maximize information transfer to the brain, especially in background noise, where the speech envelope is not robust to noise interference. In such conditions, the envelope, after decomposition into frequency bands, may be enhanced by sparse transformations such as nonnegative matrix factorization (NMF). Here, a novel CI processing algorithm is described that applies NMF to the envelope matrix (envelopogram) of 22 frequency channels in order to improve performance in noisy environments. It is evaluated for speech in eight-talker babble noise. The critical sparsity constraint parameter was first tuned using objective measures and then evaluated with subjective speech perception experiments in both normal-hearing and CI subjects. Results from vocoder simulations with 10 normal-hearing subjects showed that the algorithm significantly enhances speech intelligibility with the selected sparsity constraints. Results from eight CI subjects showed no significant overall improvement compared with the standard advanced combination encoder algorithm, but a trend toward improved word identification of about 10 percentage points at +15 dB signal-to-noise ratio (SNR). Additionally, the spread of speech perception scores narrowed considerably, from 40%–93% with the advanced combination encoder to 80%–100% with the proposed NMF coding strategy.
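The following minimal sketch (Python, using scikit-learn's generic NMF with an L1 penalty as a stand-in for the paper's sparsity constraint) shows the basic shape of the approach: factorize a nonnegative channel-by-frame envelope matrix and reconstruct it from the sparse factors. The data are random placeholders and the parameter values are assumptions, not the algorithm or settings evaluated in the study.

    # Sketch: sparse NMF applied to a nonnegative envelope matrix
    # ("envelopogram", channels x time frames). Random placeholder data;
    # generic sparsity settings, not those of the paper.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    envelopogram = rng.random((22, 500))       # 22 channels x 500 frames, nonnegative

    model = NMF(n_components=10, init="nndsvda", max_iter=500,
                alpha_W=0.1, l1_ratio=1.0)     # L1 penalty encourages sparse factors
    W = model.fit_transform(envelopogram)      # spectral (channel) patterns, 22 x 10
    H = model.components_                      # activations over time, 10 x 500

    enhanced = W @ H                           # reconstructed (enhanced) envelopes
    print(enhanced.shape, f"reconstruction error = {model.reconstruction_err_:.2f}")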



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ujeoxs
via IFTTT

A Binaural Steering Beamformer System for Enhancing a Moving Speech Source

In many everyday communication situations, several sound sources are active simultaneously. While normal-hearing listeners can easily distinguish a target sound source from interfering sources and concentrate on the target, provided that target and interferers are spatially or spectrally separated, hearing-impaired listeners and cochlear implant users have difficulty making such a distinction. In this article, we propose a binaural approach composed of a spatial filter controlled by a direction-of-arrival estimator to track and enhance a moving target sound. This approach was implemented on a real-time signal processing platform, enabling experiments with test subjects in situ. To evaluate the proposed method, a data set of sound signals with a single moving sound source in an anechoic diffuse-noise environment was generated using virtual acoustics. The proposed steering method was compared with a fixed (nonsteering) method that enhances sound from the frontal direction, in both an objective evaluation and subjective experiments using this database. In both cases, the results indicated a significant improvement in speech intelligibility and quality compared with the unprocessed signal. Furthermore, the proposed method outperformed the nonsteering method.
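The study's steering algorithm is not reproduced here; as a rough, generic illustration of the "estimate the direction, then steer a spatial filter toward it" idea, the sketch below uses two microphones, GCC-PHAT time-delay estimation, and a simple delay-and-sum combiner on synthetic signals. The microphone spacing, sampling rate, and signals are all assumptions.

    # Generic two-microphone sketch of "estimate direction, then steer":
    # GCC-PHAT time-delay estimation followed by delay-and-sum combining.
    import numpy as np

    def gcc_phat_delay(ref, sig, fs, max_tau):
        """Delay of `sig` relative to `ref` in seconds (positive = sig lags)."""
        n = len(ref) + len(sig)
        R = np.conj(np.fft.rfft(ref, n)) * np.fft.rfft(sig, n)
        R /= np.abs(R) + 1e-15                      # PHAT weighting: keep phase only
        cc = np.fft.irfft(R, n)
        max_shift = int(max_tau * fs)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs

    def delay_and_sum(ref, sig, tau, fs):
        """Advance `sig` by the estimated delay and average the two channels."""
        return 0.5 * (ref + np.roll(sig, -int(round(tau * fs))))

    fs, c, mic_dist = 16_000, 343.0, 0.17           # sampling rate, speed of sound, spacing
    rng = np.random.default_rng(1)
    x_left = rng.standard_normal(fs)                # 1 s of noise as a stand-in source
    x_right = np.roll(x_left, 3)                    # source to the left: right mic lags 3 samples

    tau = gcc_phat_delay(x_left, x_right, fs, max_tau=mic_dist / c)
    doa = np.degrees(np.arcsin(np.clip(c * tau / mic_dist, -1.0, 1.0)))
    enhanced = delay_and_sum(x_left, x_right, tau, fs)
    print(f"estimated delay = {tau * 1e6:.0f} us, DOA ~ {doa:.0f} deg from the midline")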



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YSyXHO
via IFTTT

Sensitivity to Envelope Interaural Time Differences at High Modulation Rates

Sensitivity to interaural time differences (ITDs) is considered comparable whether the ITDs are conveyed in the temporal fine structure of low-frequency tones or in the modulated envelopes of high-frequency sounds, particularly for envelopes shaped to transmit temporal information with a fidelity similar to that normally present in low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor, to the point of discrimination thresholds being unattainable, compared with the much higher (>1,000 Hz) limit of low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance at identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing are carrier-frequency dependent. Here, we assessed listeners' sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners were able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were obtained even at a modulation rate of 800 Hz. The highest modulation rate at which thresholds could be obtained declined with increasing carrier frequency for all listeners; at 10 kHz, it was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds therefore appears to be higher than previously considered.
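To make the stimulus class concrete, the sketch below generates a high-frequency tone that is sinusoidally amplitude modulated, with the ITD imposed on the envelope only while the carrier remains identical at the two ears. The carrier frequency, modulation rate, and ITD here are illustrative values, not the experimental parameters.

    # Sketch: amplitude-modulated high-frequency tone with an envelope-only ITD
    # (the carrier fine structure is identical at the two ears).
    import numpy as np

    fs = 48_000
    t = np.arange(int(0.5 * fs)) / fs     # 500 ms

    fc = 4_000                            # carrier frequency, Hz (illustrative)
    fm = 500                              # modulation rate, Hz (illustrative)
    itd = 200e-6                          # 200-microsecond envelope ITD (illustrative)

    carrier = np.sin(2 * np.pi * fc * t)
    env_left = 0.5 * (1 + np.cos(2 * np.pi * fm * t))           # raised-cosine envelope
    env_right = 0.5 * (1 + np.cos(2 * np.pi * fm * (t - itd)))  # same envelope, delayed

    left, right = env_left * carrier, env_right * carrier       # diotic carrier, dichotic envelope
    print(f"envelope ITD = {itd * 1e6:.0f} us = {itd * fm:.2f} modulation cycles")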



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UjemWr
via IFTTT

A Comparison of Two Objective Measures of Binaural Processing: The Interaural Phase Modulation Following Response and the Binaural Interaction Component

There has been continued interest in objective clinical measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is typically obtained by recording auditory brainstem responses (ABRs): the BIC is the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural − (left + right)). We have recently developed an alternative, direct measure of sensitivity to interaural time differences, namely, a following response to modulations in interaural phase difference (the interaural phase modulation following response; IPM-FR). To obtain this measure, an ongoing diotically amplitude-modulated signal is presented, and the interaural phase difference of the carrier is switched periodically at minima in the modulation cycle. Such periodic modulations of interaural phase difference can evoke a steady-state following response. BIC and IPM-FR measurements were compared in 10 normal-hearing subjects using a 16-channel electroencephalographic system. Both ABRs and IPM-FRs were observed most clearly at similar electrode locations, namely differential recordings taken from electrodes near the ear (e.g., mastoid) referenced to a vertex electrode (Cz). Although all subjects displayed clear ABRs, the BIC was not reliably observed. In contrast, the IPM-FR was typically robust and significant, and it required a considerably shorter recording session. Because the IPM-FR magnitude varied with the depth of the interaural phase difference modulation, it could potentially serve as a correlate of perceptual salience. Overall, the IPM-FR appears to be a more suitable clinical measure than the BIC.
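The two derived measures can be written down very compactly. The sketch below computes the BIC exactly as defined above (binaural response minus the sum of the monaural responses) and estimates an IPM-FR as the spectral magnitude of the EEG at the phase-switching rate; the responses are random placeholders, and the 7 Hz switching rate is an assumption since the abstract does not state the value used.

    # BIC and IPM-FR sketches on synthetic placeholder responses.
    import numpy as np

    fs = 16_384                                   # sampling rate (illustrative)
    rng = np.random.default_rng(2)

    # BIC: binaural ABR minus the sum of the two monaural ABRs.
    abr_left = rng.standard_normal(fs // 100)     # 10-ms placeholder epochs
    abr_right = rng.standard_normal(fs // 100)
    abr_binaural = abr_left + abr_right + 0.1 * rng.standard_normal(fs // 100)
    bic = abr_binaural - (abr_left + abr_right)

    # IPM-FR: magnitude of the steady-state EEG component at the rate at which
    # the carrier interaural phase difference is switched (assumed 7 Hz here).
    ipm_rate = 7.0
    eeg = rng.standard_normal(60 * fs)            # 60 s of placeholder EEG
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    ipm_fr = spectrum[np.argmin(np.abs(freqs - ipm_rate))]
    print(f"BIC peak-to-peak = {np.ptp(bic):.2f}, IPM-FR magnitude = {ipm_fr:.1f}")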



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UjemFY
via IFTTT

A Binaural CI Research Platform for Oticon Medical SP/XP Implants Enabling ITD/ILD and Variable Rate Processing

We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments on interaural time differences in combination with interaural level differences in real time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ujempx
via IFTTT

Comparing Binaural Pre-processing Strategies II: Speech Intelligibility of Bilateral Cochlear Implant Users

Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). Speech reception thresholds at the 50% level (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, reflected idealized laboratory settings, allowing the algorithms to perform better than they would in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared with the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. The SRT50 improvements provided by the binaural MVDR beamformers surpassed those of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures accounted for some, but not all, aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users.
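For readers unfamiliar with the acronym, the textbook narrowband MVDR beamformer computes, for each frequency bin, weights w = R^-1 d / (d^H R^-1 d), where R is the noise covariance matrix and d the steering vector toward the target; the distortionless constraint w^H d = 1 keeps the target unmodified while minimizing output noise power. The sketch below implements only this generic formulation with placeholder data, not the adaptive binaural implementation evaluated in the study.

    # Textbook narrowband MVDR weights for a single frequency bin:
    #     w = R^-1 d / (d^H R^-1 d)
    # Generic formulation with placeholder data, not the study's implementation.
    import numpy as np

    def mvdr_weights(noise_cov, steering):
        """noise_cov: (M, M) Hermitian noise covariance; steering: (M,) vector."""
        r_inv_d = np.linalg.solve(noise_cov, steering)
        return r_inv_d / (steering.conj() @ r_inv_d)

    rng = np.random.default_rng(3)
    M = 4                                             # e.g., two microphones per side
    A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    R = A @ A.conj().T + np.eye(M)                    # a positive-definite noise covariance
    d = np.exp(-1j * 2 * np.pi * rng.random(M))       # placeholder steering vector

    w = mvdr_weights(R, d)
    print("distortionless constraint w^H d = 1:", np.allclose(w.conj() @ d, 1.0))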



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YSyWUe
via IFTTT

Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation

In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios.
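Some of these instrumental measures have widely used open implementations. As an aside, the sketch below shows the call pattern for STOI via the third-party pystoi package (assumed installed with pip install pystoi), together with a toy band-weighted SNR in the spirit of the intelligibility-weighted SNR. The signals are random placeholders and the band weights are hypothetical, so the printed numbers are meaningless; this is not the evaluation pipeline used in the project.

    # Call-pattern sketch only: STOI via the third-party pystoi package, plus a
    # toy band-weighted SNR. Placeholder signals and hypothetical band weights.
    import numpy as np
    from pystoi import stoi

    fs = 16_000
    rng = np.random.default_rng(4)
    clean = rng.standard_normal(3 * fs)
    noise = 0.3 * rng.standard_normal(3 * fs)
    processed = clean + noise

    print("STOI:", stoi(clean, processed, fs, extended=False))

    bands = [(250, 500), (500, 1000), (1000, 2000), (2000, 4000)]   # Hz, illustrative
    weights = np.array([0.15, 0.25, 0.35, 0.25])                    # hypothetical weights

    def band_power(x, lo, hi):
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        return spectrum[(freqs >= lo) & (freqs < hi)].sum()

    snr_db = np.array([10 * np.log10(band_power(clean, lo, hi) / band_power(noise, lo, hi))
                       for lo, hi in bands])
    print("band-weighted SNR (dB):", float(weights @ snr_db))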



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UjenJX
via IFTTT

Comparing Binaural Pre-processing Strategies III: Speech Intelligibility of Normal-Hearing and Hearing-Impaired Listeners

A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and a single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit provided by the algorithms. The individual SRTs measured without pre-processing and the individual benefits were objectively estimated using the binaural speech intelligibility model. Ten NH listeners and 12 HI listeners participated; the participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio than NH listeners to obtain 50% intelligibility, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in the single competing talker condition). Predictions with the binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no-pre-processing condition. Regarding the benefit from the algorithms, however, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ujem8P
via IFTTT

Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues arising from hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing, and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with the measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in the speech cues available to the different listener types are sufficient to explain the changes in spatial release from masking across these simulated listener types.
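As a concrete illustration of the simulation tools named above (not the study's actual processing), the sketch below implements a bare-bones noise vocoder, which replaces the fine structure in each analysis band with envelope-modulated noise, and then shows the spatial-release-from-masking arithmetic. The channel count, filter settings, test signal, and SRT values are all made-up examples.

    # Bare-bones noise vocoder (a common way to simulate CI processing) and the
    # spatial-release-from-masking (SRM) arithmetic. All values are illustrative.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, edges):
        """Replace the fine structure in each band with envelope-modulated noise."""
        rng = np.random.default_rng(0)
        out = np.zeros_like(x)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)
            envelope = np.abs(hilbert(band))                  # band envelope
            carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
            out += envelope * carrier                         # envelope on a noise carrier
        return out

    fs = 16_000
    t = np.arange(fs) / fs
    test_signal = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
    edges = np.geomspace(200, 7000, 9)                        # 8 analysis channels
    vocoded = noise_vocode(test_signal, fs, edges)

    # SRM is the SRT difference between co-located and spatially separated conditions.
    srt_colocated, srt_separated = -2.0, -8.0                 # dB SNR, made-up values
    print(f"SRM = {srt_colocated - srt_separated:.1f} dB")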



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YSyTYz
via IFTTT

A Binaural CI Research Platform for Oticon Medical SP/XP Implants Enabling ITD/ILD and Variable Rate Processing

We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments related to interaural time differences in combination with interaural level differences in real-time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely.



from #Audiology via ola Kala on Inoreader http://ift.tt/1Ujempx
via IFTTT

Comparing Binaural Pre-processing Strategies II: Speech Intelligibility of Bilateral Cochlear Implant Users

Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users.



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSyWUe
via IFTTT

Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation

In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios.



from #Audiology via ola Kala on Inoreader http://ift.tt/1UjenJX
via IFTTT

Comparing Binaural Pre-processing Strategies III: Speech Intelligibility of Normal-Hearing and Hearing-Impaired Listeners

A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level.



from #Audiology via ola Kala on Inoreader http://ift.tt/1Ujem8P
via IFTTT

Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types.



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSyTYz
via IFTTT

Simultaneous Assessment of Speech Identification and Spatial Discrimination: A Potential Testing Approach for Bilateral Cochlear Implant Users?

With increasing numbers of children and adults receiving bilateral cochlear implants, there is an urgent need for assessment tools that enable testing of binaural hearing abilities. Current test batteries are either limited in scope or are of an impractical duration for routine testing. Here, we report a behavioral test that enables combined testing of speech identification and spatial discrimination in noise. In this task, multitalker babble was presented from all speakers, and pairs of speech tokens were sequentially presented from two adjacent speakers. Listeners were required to identify both words from a closed set of four possibilities and to determine whether the second token was presented to the left or right of the first. In Experiment 1, normal-hearing adult listeners were tested at 15° intervals throughout the frontal hemifield. Listeners showed highest spatial discrimination performance in and around the frontal midline, with a decline at more eccentric locations. In contrast, speech identification abilities were least accurate near the midline and showed an improvement in performance at more lateral locations. In Experiment 2, normal-hearing listeners were assessed using a restricted range of speaker locations designed to match those found in clinical testing environments. Here, speakers were separated by 15° around the midline and 30° at more lateral locations. This resulted in a similar pattern of behavioral results as in Experiment 1. We conclude, this test offers the potential to assess both spatial discrimination and the ability to use spatial information for unmasking in clinical populations.



from #Audiology via ola Kala on Inoreader http://ift.tt/1UjepkZ
via IFTTT

Advancing Binaural Cochlear Implant Technology

This special issue contains a collection of 13 papers highlighting the collaborative research and engineering project entitled Advancing Binaural Cochlear Implant Technology—ABCIT—as well as research spin-offs from the project. In this introductory editorial, a brief history of the project is provided, alongside an overview of the studies.



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSyYvg
via IFTTT

Reducing Current Spread by Use of a Novel Pulse Shape for Electrical Stimulation of the Auditory Nerve

Improving the electrode-neuron interface to reduce current spread between individual electrodes has been identified as one of the main objectives in the search for future improvements in cochlear-implant performance. Here, we address this problem by presenting a novel stimulation strategy that takes account of the biophysical properties of the auditory neurons (spiral ganglion neurons, SGNs) stimulated in electrical hearing. This new strategy employs a ramped pulse shape, where the maximum amplitude is achieved through a linear slope in the injected current. We present the theoretical framework that supports this new strategy and that suggests it will improve the modulation of SGNs’ activity by exploiting their sensitivity to the rising slope of current pulses. The theoretical consequence of this sensitivity to the slope is a reduction in the spread of excitation within the cochlea and, consequently, an increase in the neural dynamic range. To explore the impact of the novel stimulation method on neural activity, we performed in vitro recordings of SGNs in culture. We show that the stimulus efficacy required to evoke action potentials in SGNs falls as the stimulus slope decreases. This work lays the foundation for a novel, and more biomimetic, stimulation strategy with considerable potential for implementation in cochlear-implant technology.



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSz06x
via IFTTT

Sparse Nonnegative Matrix Factorization Strategy for Cochlear Implants

Current cochlear implant (CI) strategies carry speech information via the waveform envelope in frequency subbands. CIs require efficient speech processing to maximize information transfer to the brain, especially in background noise, where the speech envelope is not robust to noise interference. In such conditions, the envelope, after decomposition into frequency bands, may be enhanced by sparse transformations, such as nonnegative matrix factorization (NMF). Here, a novel CI processing algorithm is described, which works by applying NMF to the envelope matrix (envelopogram) of 22 frequency channels in order to improve performance in noisy environments. It is evaluated for speech in eight-talker babble noise. The critical sparsity constraint parameter was first tuned using objective measures and then evaluated with subjective speech perception experiments for both normal hearing and CI subjects. Results from vocoder simulations with 10 normal hearing subjects showed that the algorithm significantly enhances speech intelligibility with the selected sparsity constraints. Results from eight CI subjects showed no significant overall improvement compared with the standard advanced combination encoder algorithm, but a trend toward improvement of word identification of about 10 percentage points at +15 dB signal-to-noise ratio (SNR) was observed in the eight CI subjects. Additionally, a considerable reduction of the spread of speech perception performance from 40% to 93% for advanced combination encoder to 80% to 100% for the suggested NMF coding strategy was observed.



from #Audiology via ola Kala on Inoreader http://ift.tt/1Ujeoxs
via IFTTT

A Binaural Steering Beamformer System for Enhancing a Moving Speech Source

In many daily life communication situations, several sound sources are simultaneously active. While normal-hearing listeners can easily distinguish the target sound source from interfering sound sources—as long as target and interferers are spatially or spectrally separated—and concentrate on the target, hearing-impaired listeners and cochlear implant users have difficulties in making such a distinction. In this article, we propose a binaural approach composed of a spatial filter controlled by a direction-of-arrival estimator to track and enhance a moving target sound. This approach was implemented on a real-time signal processing platform enabling experiments with test subjects in situ. To evaluate the proposed method, a data set of sound signals with a single moving sound source in an anechoic diffuse noise environment was generated using virtual acoustics. The proposed steering method was compared with a fixed (nonsteering) method that enhances sound from the frontal direction in an objective evaluation and subjective experiments using this database. In both cases, the obtained results indicated a significant improvement in speech intelligibility and quality compared with the unprocessed signal. Furthermore, the proposed method outperformed the nonsteering method.



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSyXHO
via IFTTT

Sensitivity to Envelope Interaural Time Differences at High Modulation Rates

Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and the modulated envelopes of high-frequency sounds are considered comparable, particularly for envelopes shaped to transmit similar fidelity of temporal information normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor—to the point of discrimination thresholds being unattainable—compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing is carrier-frequency dependent. Here, we assessed listeners’ sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered.



from #Audiology via ola Kala on Inoreader http://ift.tt/1UjemWr
via IFTTT

A Comparison of Two Objective Measures of Binaural Processing: The Interaural Phase Modulation Following Response and the Binaural Interaction Component

There has been continued interest in clinical objective measures of binaural processing. One commonly proposed measure is the binaural interaction component (BIC), which is obtained typically by recording auditory brainstem responses (ABRs)—the BIC reflects the difference between the binaural ABR and the sum of the monaural ABRs (i.e., binaural – (left + right)). We have recently developed an alternative, direct measure of sensitivity to interaural time differences, namely, a following response to modulations in interaural phase difference (the interaural phase modulation following response; IPM-FR). To obtain this measure, an ongoing diotically amplitude-modulated signal is presented, and the interaural phase difference of the carrier is switched periodically at minima in the modulation cycle. Such periodic modulations to interaural phase difference can evoke a steady state following response. BIC and IPM-FR measurements were compared from 10 normal-hearing subjects using a 16-channel electroencephalographic system. Both ABRs and IPM-FRs were observed most clearly from similar electrode locations—differential recordings taken from electrodes near the ear (e.g., mastoid) in reference to a vertex electrode (Cz). Although all subjects displayed clear ABRs, the BIC was not reliably observed. In contrast, the IPM-FR typically elicited a robust and significant response. In addition, the IPM-FR measure required a considerably shorter recording session. As the IPM-FR magnitude varied with interaural phase difference modulation depth, it could potentially serve as a correlate of perceptual salience. Overall, the IPM-FR appears a more suitable clinical measure than the BIC.



from #Audiology via ola Kala on Inoreader http://ift.tt/1UjemFY
via IFTTT

A Binaural CI Research Platform for Oticon Medical SP/XP Implants Enabling ITD/ILD and Variable Rate Processing

We present the first portable, binaural, real-time research platform compatible with Oticon Medical SP and XP generation cochlear implants. The platform consists of (a) a pair of behind-the-ear devices, each containing front and rear calibrated microphones, (b) a four-channel USB analog-to-digital converter, (c) real-time PC-based sound processing software called the Master Hearing Aid, and (d) USB-connected hardware and output coils capable of driving two implants simultaneously. The platform is capable of processing signals from the four microphones simultaneously and producing synchronized binaural cochlear implant outputs that drive two (bilaterally implanted) SP or XP implants. Both audio signal preprocessing algorithms (such as binaural beamforming) and novel binaural stimulation strategies (within the implant limitations) can be programmed by researchers. When the whole research platform is combined with Oticon Medical SP implants, interaural electrode timing can be controlled on individual electrodes to within ±1 µs and interaural electrode energy differences can be controlled to within ±2%. Hence, this new platform is particularly well suited to performing experiments related to interaural time differences in combination with interaural level differences in real-time. The platform also supports instantaneously variable stimulation rates and thereby enables investigations such as the effect of changing the stimulation rate on pitch perception. Because the processing can be changed on the fly, researchers can use this platform to study perceptual changes resulting from different processing strategies acutely.



from #Audiology via ola Kala on Inoreader http://ift.tt/1Ujempx
via IFTTT

Comparing Binaural Pre-processing Strategies II: Speech Intelligibility of Bilateral Cochlear Implant Users

Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users.



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSyWUe
via IFTTT

Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation

In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios.



from #Audiology via ola Kala on Inoreader http://ift.tt/1UjenJX
via IFTTT

Comparing Binaural Pre-processing Strategies III: Speech Intelligibility of Normal-Hearing and Hearing-Impaired Listeners

A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level.



from #Audiology via ola Kala on Inoreader http://ift.tt/1Ujem8P
via IFTTT

Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types.
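To make the simulated listener types more concrete, here is a rough noise-vocoder plus low-pass sketch of the kind of processing described: band envelopes are re-imposed on noise carriers to mimic CI processing, and a low-pass filter stands in for residual low-frequency acoustic hearing. Channel count, filter orders, corner frequencies, and function names are all assumptions for illustration and do not reproduce the study's actual vocoder parameters.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, edges_hz):
    """Crude n-channel noise vocoder: band-split the input, extract each band's
    temporal envelope, and re-impose it on band-limited noise (CI simulation)."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                      # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

def residual_hearing(x, fs, cutoff_hz=500.0):
    """Low-pass filtering as a stand-in for residual low-frequency acoustic hearing."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Example: 8-channel vocoding of a 1-s noise burst (a stand-in for a speech signal)
fs = 16000
x = np.random.default_rng(1).standard_normal(fs)
edges = np.geomspace(100, 7000, 9)                       # 8 logarithmically spaced bands
simulated_ci = noise_vocoder(x, fs, edges)
simulated_eas = simulated_ci + residual_hearing(x, fs)   # "electric + acoustic" listener type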



from #Audiology via ola Kala on Inoreader http://ift.tt/1YSyTYz
via IFTTT

Frequency and Demographics of Gentamicin Use.

Objective: To understand how aminoglycosides such as gentamicin are used in a tertiary care setting, and to familiarize otologists with the demographics and risk factors associated with gentamicin use at major medical centers to allow the possibility of early intervention. Study Design: Retrospective review of existing clinical data. Setting: University of Rochester Medical Center (URMC), including all associated hospitals (Strong Memorial Hospital, Highland Hospital, etc.). Patients: All hospital inpatients who were prescribed intravenous gentamicin over a 4-year period starting in February 2011. Interventions: None. Main Outcome Measures: Major patient populations receiving gentamicin and the associated diagnoses for which gentamicin was prescribed. Results: A total of 5,257 patients received gentamicin, falling into three major populations: 1) more than half of the exposures were in children, and 42% were in patients under 2 years of age; 2) 18% of the exposures were in young adults aged 18 to 34, of whom 88% were women, with most of these hospitalizations pregnancy related; and 3) patients older than 55 years accounted for 19% of the exposures, most of whom had serious infections. Disorders associated with patients receiving gentamicin included perinatal complications (1,564); sepsis (1,399); acute/chronic renal disease (1,287); labor, delivery, or neonatal complications (1,250); diabetes (949); and UTI/pyelonephritis (775). Conclusions: Gentamicin is still widely used, and the neonatal population and young adult women are at especially high risk for gentamicin-induced ototoxicity. Further data analysis should focus on strategies to protect these populations by avoiding unnecessary exposures and by possible concurrent administration of protective medications such as metformin and aspirin. Copyright (C) 2015 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1NXqmsS
via IFTTT

Results of Postoperative, CT-based, Electrode Deactivation on Hearing in Prelingually Deafened Adult Cochlear Implant Recipients.

Objective: To test the use of a novel, image-guided cochlear implant (CI) programming (IGCIP) technique on prelingually deafened, adult CI recipients. Study Design: Prospective unblinded study. Setting: Tertiary referral center. Patients: Twenty-six prelingually deafened adult CI recipients with 29 CIs (3 bilateral). Intervention(s): Temporal-bone CT scans were used as input to a series of semiautomated computer algorithms that estimate the location of the electrodes in reference to the modiolus. This information was used to selectively deactivate suboptimally located electrodes, i.e., those lying farther from the modiolus than a neighboring electrode targeting the same site. Patients used the new IGCIP program exclusively for 3 to 5 weeks. Main Outcome Measure(s): Minimum Speech Test Battery (MSTB), quality of life (QOL), and spectral modulation detection (SMD). Results: On average, one-third of electrodes were deactivated. At the group level, no significant differences were noted for MSTB measures or for QOL estimates. Average SMD significantly improved after IGCIP reprogramming, which is consistent with improved spatial selectivity. Using 95% confidence interval data for CNC, AzBio, and BKB-SIN at the individual level, 76 to 90% of subjects demonstrated equivocal or significant improvement. Ultimately, 21 of 29 (72.41%) elected to keep the IGCIP map because of perceived benefit, often substantiated by improvement on one or more of the MSTB, QOL, or SMD measures. Conclusions: Knowledge of the geometric relationship between CI electrodes and the modiolus appears to be useful in adjusting CI maps in prelingually deafened adults. Long-term improvements may be observed resulting from improved spatial selectivity and spectral resolution. Copyright (C) 2015 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company
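The deactivation rule lends itself to a compact illustration. The sketch below is a toy version only: it assumes each contact has already been assigned a nearest modiolar target site and an estimated distance (both hypothetical inputs), and it keeps the closest contact per site while deactivating the rest. The published IGCIP algorithms are considerably more involved; nothing here reproduces them.

from typing import Dict, List, Sequence, Tuple

def select_deactivations(target_site: Sequence[int],
                         dist_mm: Sequence[float]) -> Tuple[List[int], List[int]]:
    """Toy version of the deactivation rule sketched above: when several contacts
    map to the same modiolar target site, keep only the contact closest to the
    modiolus and deactivate the rest."""
    best: Dict[int, int] = {}
    for contact, site in enumerate(target_site):
        if site not in best or dist_mm[contact] < dist_mm[best[site]]:
            best[site] = contact
    kept = sorted(best.values())
    deactivated = [c for c in range(len(target_site)) if c not in kept]
    return kept, deactivated

# Hypothetical 8-contact array: contacts 2 and 3 compete for site 2, and
# contacts 5 and 6 compete for site 4; the farther contact in each pair is dropped.
sites = [0, 1, 2, 2, 3, 4, 4, 5]
dists = [0.4, 0.5, 0.9, 0.5, 0.6, 0.7, 1.2, 0.5]
print(select_deactivations(sites, dists))   # -> ([0, 1, 3, 4, 5, 7], [2, 6])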

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1SoxVOB
via IFTTT

Cavernous Angiomyoma of the Internal Auditory Canal.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1NXqnNm
via IFTTT

Comparisons of Longitudinal Trajectories of Social Competence: Parent Ratings of Children With Cochlear Implants Versus Hearing Peers.

Objective: To evaluate the longitudinal effects of cochlear implants (CIs) on young, deaf children's social competence over 5 years of implant use and to compare their social skills with those of same-aged, hearing peers. Study Design: Prospective, longitudinal between- and within-subjects design, with assessments completed 3 times over 5 years. Setting: This study was conducted at six cochlear implant centers and two preschools that enrolled both CI and hearing children. Patients: Parents of 132 children with CIs and 67 age-matched hearing controls completed the study measures. Children were between 5 and 9 years of age at the first time point. Interventions: Cochlear implantation and speech-language therapy. Main Outcome Measures: Three subscales were drawn from two standardized measures of behavioral and social functioning, the Behavioral Assessment Scale for Children (Adaptability, Social Skills) and the Social Skills Rating System (Social Skills). A latent social competence variable, created from these subscales, was modeled over time. Results: Parent data indicated that children with CIs were delayed in comparison to their hearing peers on the social competence latent variable across all time points. Further, there was minimal evidence of "catch-up" growth over this 5-year period. Conclusion: Children with CIs continued to experience delays in social competence after several years of implant use. Despite documented gains in oral language, deficits in social competence remained. To date, no interventions for children with CIs have targeted these social and behavioral skills. Thus, interventions that address the functioning of the "whole child" following cochlear implantation are needed. Copyright (C) 2015 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1NXqmcg
via IFTTT

Bilateral Facial and Trigeminal Nerve Hypertrophy in a Patient With Polyneuropathy.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1SoxTq9
via IFTTT

Is MRI Equal to CT in the Evaluation of Thin and Dehiscent Superior Semicircular Canals?.

Objective: Can magnetic resonance imaging (MRI) diagnose abnormally thin and dehiscent superior semicircular canals (SSCs), which are traditionally evaluated with computed tomography (CT) imaging? Study Design: Retrospective clinical study. Setting: Tertiary referral center. Patients: Adults who underwent both MRI and CT of the temporal bones over the past 3 years. Interventions: CT and MR images of SSCs were separately reviewed, in a blinded fashion, by three neuroradiologists at our institution. CT diagnosis of an abnormally thin or dehiscent SSC was used as the "gold" standard. Main Outcome Measures: 1) Dehiscent SSC. 2) Abnormally thin SSC. 3) Normal SSC. Results: One hundred temporal bones with evaluable superior semicircular canals from 51 patients were eligible for review on CT and MR imaging. There were 26 cases of thin SSC and 17 cases of SSC dehiscence on CT imaging, of which 13 and 15, respectively, were also identified on MRI. Nine false-positive dehiscent SSCs and four false-positive thin SSCs were observed on MR imaging but not on CT. For thin SSCs, MRI sensitivity was 61.9% and specificity was 94.3%, with a positive predictive value of 81.3% and a negative predictive value of 86.2%. For dehiscent SSCs, sensitivity was 88.2% and specificity was 89.2%, with a positive predictive value of 62.5% and a negative predictive value of 97.4%. Conclusion: In this series, MRI in the axial and coronal planes had a high negative predictive value for thin SSC (86%) and dehiscent SSC (97%). However, MRI cannot conclusively diagnose thin or dehiscent SSCs. Copyright (C) 2015 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company
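The dehiscence figures above follow from a standard 2x2 table, which makes the sketch below a convenient sanity check. The only assumption is the implied cell counts (15 true positives, 9 false positives, 2 false negatives, and 74 true negatives out of 100 evaluable canals); the helper function itself is just the textbook definitions.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Dehiscent-SSC counts as reported above (per evaluable temporal bone, n = 100):
# 17 dehiscent on CT, 15 of which were seen on MRI; 9 MRI false positives.
metrics = diagnostic_metrics(tp=15, fp=9, fn=2, tn=74)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")   # ~88.2%, 89.2%, 62.5%, 97.4%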

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1NXqngA
via IFTTT

Atrophy of the Stria Vascularis.

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1P2Sef9
via IFTTT

Cochlear Implant Access in Six Developed Countries.

Background: Access to cochlear implantation varies greatly around the world. It is affected by factors that are specific to each country's health care system, by awareness, and by societal attitudes regarding deafness. Methods: Cochlear implant clinicians and researchers from six countries explored and discussed these variations and their likely causes: Robert Briggs from Australia; Wolfe-Dieter Baumgartner from Austria; Thomas Lenarz from Germany; Eva Koltharp from Sweden; Christopher Raine from the United Kingdom; and Craig Buchman, Donna Sorkin, and Christine Yoshinago from the United States. Results: Utilization rates differ markedly between the pediatric and adult populations in all six countries. Pediatric utilization in the six countries (all in the developed world) ranged from a low of 50% in the United States to a high of 97% in Australia. Adult utilization is less than 10% everywhere in the world. Conclusions: Pediatric access to care was excellent, with the exception of Germany and the United States, where there is an inadequate referral system. Adult utilization was low everywhere because of the lack of screening for adults and the fact that primary care physicians and even audiologists are unfamiliar with CI candidacy criteria and outcomes, and hence typically do not make patient referrals. Copyright (C) 2015 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1P2SdYK
via IFTTT

Chronic Symptoms After Vestibular Neuritis and the High-Velocity Vestibulo-Ocular Reflex.

Hypothesis: As the anterior and posterior semicircular canals are vital to the regulation of gaze stability, particularly during locomotion or vehicular travel, we tested whether the high-velocity vestibulo-ocular reflex (VOR) of the three ipsilesional semicircular canals, elicited by the modified Head Impulse Test, would correlate with subjective dizziness or vertigo scores after vestibular neuritis (VN). Background: Recovery after acute VN varies, with around half of patients reporting persistent symptoms long after the acute episode. However, an unanswered question is whether chronic symptoms are associated with impairment of the high-velocity VOR of the anterior or posterior canals. Methods: Twenty patients who had experienced an acute episode of VN at least 3 months earlier were included in this study. Participants were assessed with the video head impulse test (vHIT) of all six canals, bithermal caloric irrigation, the Dizziness Handicap Inventory (DHI), and the Vertigo Symptoms Scale short-form (VSS). Results: Of these 20 patients, 12 thought that they had recovered from the initial episode, whereas 8 did not and reported elevated DHI and VSS scores. However, we found no correlation between DHI or VSS scores and the ipsilesional single or combined vHIT gain, vHIT gain asymmetry, or caloric paresis. The high-velocity VOR was not different between patients who thought they had recovered and patients who thought they had not. Conclusion: Our findings suggest that chronic symptoms of dizziness after VN are not associated with the high-velocity VOR of the single or combined ipsilesional horizontal, anterior, or posterior semicircular canals. Copyright (C) 2015 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company
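For readers unfamiliar with how such an association would be tested, the snippet below runs a rank correlation between hypothetical ipsilesional vHIT gains and DHI scores. The data, sample size, and the choice of Spearman's rho are purely illustrative; the study does not specify which correlation statistic was used, and none of these numbers come from it.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical data for 8 patients: mean ipsilesional vHIT gain across the
# horizontal, anterior, and posterior canals, plus DHI questionnaire scores.
vhit_gain = np.array([0.45, 0.62, 0.58, 0.81, 0.52, 0.90, 0.67, 0.74])
dhi_score = np.array([48,   20,   36,   12,   54,   30,   8,    26])

rho, p = spearmanr(vhit_gain, dhi_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A non-significant p (as reported above for the real data) would indicate no
# reliable association between residual high-velocity VOR function and symptom scores.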

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1R1QKYS
via IFTTT

Usual walking speed and all-cause mortality risk in older people: A systematic review and meta-analysis

Publication date: February 2016
Source:Gait & Posture, Volume 44
Author(s): Bing Liu, Xinhua Hu, Qiang Zhang, Yichuan Fan, Jun Li, Rui Zou, Ming Zhang, Xiuqi Wang, Junpeng Wang
The purpose of this study was to investigate the relationship between slow usual walking speed and all-cause mortality risk in older people by conducting a meta-analysis. We searched the Pubmed, Embase, and Cochrane Library databases up to March 2015. Only prospective observational studies that investigated usual walking speed and all-cause mortality risk in older adults aged approximately 65 years or more were included. Walking speed had to be assessed as a single-item measure over a short distance. The pooled adjusted risk ratio (RR) and 95% confidence interval (CI) were computed for the lowest versus the highest usual walking speed category. A total of 9 studies involving 12,901 participants were included. Meta-analysis with a random-effects model showed that the pooled adjusted RR of all-cause mortality was 1.89 (95% CI 1.46–2.46) comparing the lowest to the highest usual walking speed. Subgroup analyses indicated that the risk of all-cause mortality associated with slow usual walking speed was not significant among women (RR 1.45; 95% CI 0.95–2.20). Slow usual walking speed is an independent predictor of all-cause mortality in men, but not in women, among older adults aged approximately 65 years or more.
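The pooled estimate reported here is the kind of quantity a random-effects meta-analysis produces; the sketch below shows DerSimonian-Laird pooling of log risk ratios under invented study-level inputs. The study-level RRs, their standard errors, and the function name are assumptions for illustration and are unrelated to the nine included studies.

import numpy as np

def dersimonian_laird(log_rr, se):
    """Pool log risk ratios with a DerSimonian-Laird random-effects model."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w_fixed = 1.0 / se**2
    fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
    q = np.sum(w_fixed * (log_rr - fixed) ** 2)              # Cochran's Q
    df = len(log_rr) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                            # between-study variance
    w = 1.0 / (se**2 + tau2)
    pooled = np.sum(w * log_rr) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    rr = np.exp(pooled)
    ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
    return rr, ci

# Hypothetical study-level risk ratios (lowest vs. highest walking-speed category)
rr_studies = [2.1, 1.6, 1.9, 2.4, 1.5]
se_log_rr  = [0.20, 0.25, 0.15, 0.30, 0.22]
print(dersimonian_laird(np.log(rr_studies), se_log_rr))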



from #Audiology via ola Kala on Inoreader http://ift.tt/1Omr3Qh
via IFTTT


Wednesday, 30 December 2015

Bidirectional Interference Between Speech and Nonspeech Tasks in Younger, Middle-Aged, and Older Adults

Purpose
The purpose of this study was to examine divided attention over a large age range by looking at the effects of 3 nonspeech tasks on concurrent speech motor performance. The nonspeech tasks were designed to facilitate measurement of bidirectional interference, allowing examination of their sensitivity to speech activity. A cross-sectional design was selected to explore possible changes in divided-attention effects associated with age.
Method
Sixty healthy participants were separated into 3 groups of 20: younger (20s), middle-aged (40s), and older (60s) adults. Each participant completed a speech task (sentence repetitions) once in isolation and once concurrently with each of 3 nonspeech tasks: a semantic-decision linguistic task, a quantitative-comparison cognitive task, and a manual motor task. The nonspeech tasks were also performed in isolation.
Results
Data from speech kinematics and nonspeech task performance indicated significant task-specific divided attention interference, with divided attention affecting speech and nonspeech measures in the linguistic and cognitive conditions and affecting speech measures in the manual motor condition. There was also a significant age effect for utterance duration.
Conclusions
The results increase what is known about bidirectional interference between speech and other concurrent tasks as well as age effects on speech motor control.
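Divided-attention interference of this kind is often summarized as a dual-task cost, i.e., the percentage change in a performance measure from single- to dual-task conditions. The sketch below is a generic version of that calculation; the function, the example utterance durations, and the sign convention are illustrative assumptions rather than the metric actually used in the study.

def dual_task_cost(single: float, dual: float, higher_is_better: bool = True) -> float:
    """Percentage change in performance from single- to dual-task conditions.
    Positive values indicate interference (worse performance under divided attention)."""
    if higher_is_better:
        return 100.0 * (single - dual) / single
    return 100.0 * (dual - single) / single   # e.g., durations or error counts

# Hypothetical utterance durations (s): longer under dual-task = interference
print(dual_task_cost(single=2.10, dual=2.35, higher_is_better=False))  # ~11.9%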

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1kbgFPk
via IFTTT

The Relationship Between Submental Surface Electromyography and Hyo-Laryngeal Kinematic Measures of Mendelsohn Maneuver Duration

Purpose
The Mendelsohn Maneuver (MM) is a commonly prescribed technique that is taught to individuals with dysphagia to improve swallowing ability. Due to cost and safety concerns associated with videofluoroscopy (VFS) use, submental surface electromyography (ssEMG) is commonly used in place of VFS to train the MM in clinical and research settings. However, it is unknown whether ssEMG accurately reflects the prolonged hyo-laryngeal movements required for execution of the MM. The primary goal of this study was to examine the relationship among ssEMG duration, duration of laryngeal vestibule closure, and duration of maximum hyoid elevation during MM performance.
Method
Participants included healthy adults and patients with dysphagia due to stroke. All performed the MM during synchronous ssEMG and VFS recording.
Results
Significant correlations between ssEMG duration and VFS measures of hyo-laryngeal kinematic durations during MM performance ranged from very weak to moderate. None of the correlations in the group of stroke patients reached statistical significance.
Conclusion
Clinicians and researchers should consider that the MM involves novel hyo-laryngeal kinematics that may be only moderately represented with ssEMG. Thus, there is a risk that these target therapeutic movements are not consistently being trained.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UdaJ4W
via IFTTT

Preliteracy Speech Sound Production Skill and Linguistic Characteristics of Grade 3 Spellings: A Study Using the Templin Archive

Purpose
This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the time data were collected.
Method
Participants (N = 250), selected from the Templin Archive (Templin, 2004), varied on prekindergarten SSPS. Participants' real word spellings in Grade 3 were evaluated using a metric of linguistic knowledge, the Computerized Spelling Sensitivity System (Masterson & Apel, 2013). Relationships between kindergarten speech error types and later spellings also were explored.
Results
Prekindergarten children in the lowest SSPS subgroup (7th percentile) scored poorest among articulatory subgroups on both individual spelling elements (phonetic elements, junctures, and affixes) and acceptable spelling (using relatively more omissions and illegal spelling patterns). Within the 7th percentile subgroup, there were no statistical spelling differences between those with mostly atypical speech sound errors and those with mostly typical speech sound errors.
Conclusions
Findings were consistent with predictions from dual route models of spelling that SSPS is one of many variables associated with spelling skill and that children with impaired SSPS are at risk for spelling difficulty.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ofg4K0
via IFTTT

Developing a Phonological Awareness Curriculum: Reflections on an Implementation Science Framework

Purpose
This article describes the process of developing and implementing a supplemental early literacy curriculum designed for preschoolers demonstrating delays in literacy development.
Method
Intervention research and implementation research have traditionally been viewed as sequential processes. This article illustrates a process of intervention development that was paralleled by a focus on implementation in early childhood settings. The exploration, preparation, implementation, sustainment framework is used to describe factors that need to be considered during a progression through these 4 phases of implementation. A post hoc analysis provides insight into a rather nonlinear progression of intervention development and highlights considerations and activities that have facilitated implementation.
Conclusions
The guiding principles of the exploration, preparation, implementation, sustainment implementation science framework highlight the important considerations in developing effective and practical interventions. Considering implementation and sustainment during the intervention development process and using data-based decision making has the potential to expand the availability of user-friendly evidence-based practices in communication sciences and disorders and encourage a bridging of the researcher–clinician gap.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1PxCuFr
via IFTTT

Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

Purpose
To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not expected to improve speech fluency.
Method
Ten participants with APH/AOS and 10 neurologically healthy (NH) participants were studied under both feedback conditions. To allow examination of individual responses, we used an ABACA design. Effects were examined on syllable rate, disfluency duration, and vocal intensity.
Results
Seven of 10 APH/AOS participants increased fluency with masking by increasing rate, decreasing disfluency duration, or both. In contrast, none of the NH participants increased speaking rate with MAF. In the AAF condition, only 1 APH/AOS participant increased fluency. Four APH/AOS participants and 8 NH participants slowed their rate with AAF.
Conclusions
Speaking with MAF appears to increase fluency in a subset of individuals with APH/AOS, indicating that overreliance on auditory feedback monitoring may contribute to their disorder presentation. The distinction between responders and nonresponders was not linked to AOS diagnosis, so additional work is needed to develop hypotheses for candidacy and underlying control mechanisms.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1m3SLXj
via IFTTT

Erratum



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1U0wgNu
via IFTTT

Responses to Intensity-Shifted Auditory Feedback During Running Speech

Purpose
Responses to intensity perturbation during running speech were measured to understand whether prosodic features are controlled in an independent or integrated manner.
Method
Nineteen English-speaking healthy adults (age range = 21–41 years) produced 480 sentences in which emphatic stress was placed on either the 1st or 2nd word. One participant group received an upward intensity perturbation during stressed word production, and the other group received a downward intensity perturbation. Compensations for perturbation were evaluated by comparing differences in participants' stressed and unstressed peak fundamental frequency (F0), peak intensity, and word duration during perturbed versus baseline trials.
Results
Significant increases in stressed–unstressed peak intensities were observed during the ramp and perturbation phases of the experiment in the downward group only. Compensations for F0 and duration did not reach significance for either group.
Conclusions
Consistent with previous work, speakers appear sensitive to auditory perturbations that affect a desired linguistic goal. In contrast to previous work on F0 perturbation that supported an integrated-channel model of prosodic control, the current work only found evidence for intensity-specific compensation. This discrepancy may suggest different F0 and intensity control mechanisms, threshold-dependent prosodic modulation, or a combined control scheme.
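The compensation measure described here (the change in the stressed-minus-unstressed peak-intensity contrast between perturbed and baseline trials) can be written down in a few lines. The sketch below does exactly that with invented peak-intensity values; the function name and numbers are assumptions and do not reproduce the study's analysis.

import numpy as np

def stress_contrast_change(stressed_db, unstressed_db,
                           baseline_stressed_db, baseline_unstressed_db):
    """Change in the stressed-minus-unstressed peak-intensity contrast (dB)
    from baseline to perturbed trials; positive values suggest the speaker
    enlarged the contrast to oppose the perturbation."""
    perturbed = np.mean(stressed_db) - np.mean(unstressed_db)
    baseline = np.mean(baseline_stressed_db) - np.mean(baseline_unstressed_db)
    return perturbed - baseline

# Hypothetical peak intensities (dB SPL) for a participant in the downward group
print(stress_contrast_change([78, 79, 77], [70, 71, 70],
                             [76, 77, 76], [70, 70, 71]))   # ~ +1.7 dB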

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1IJUyux
via IFTTT

Bridging the Gap Between Research and Practice: Implementation Science

Purpose
This article introduces implementation science, which focuses on research methods that promote the systematic application of research findings to practice.
Method
The narrative defines implementation science and highlights the importance of moving research along the pipeline from basic science to practice as one way to facilitate evidence-based service delivery. This review identifies challenges in developing and testing interventions in order to achieve widespread adoption in practice settings. A framework for conceptualizing implementation research is provided, including an example to illustrate the application of principles in speech-language pathology. Last, the authors reflect on the status of implementation research in the discipline of communication sciences and disorders.
Conclusions
The extant literature highlights the value of implementation science for reducing the gap between research and practice in our discipline. While having unique principles guiding implementation research, many of the challenges and questions are similar to those facing any investigators who are attempting to design valid and reliable studies. This article is intended to invigorate interest in the uniqueness of implementation science among those pursuing both basic and applied research. In this way, it should help ensure the discipline's knowledge base is realized in practice and policy that affects the lives of individuals with communication disorders.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1NW7DOk
via IFTTT

Variability and Diagnostic Accuracy of Speech Intelligibility Scores in Children

Purpose
We examined variability of speech intelligibility scores and how well intelligibility scores predicted group membership among 5-year-old children with speech motor impairment (SMI) secondary to cerebral palsy and an age-matched group of typically developing (TD) children.
Method
Speech samples varying in length from 1–4 words were elicited from 24 children with cerebral palsy (mean age 60.50 months) and 20 TD children (mean age 60.33 months). Two hundred twenty adult listeners made orthographic transcriptions of speech samples (n = 5 per child).
Results
Variability associated with listeners made a significant contribution to explaining the variance in intelligibility scores for TD and SMI children, but the magnitude was greater for TD children. Intelligibility scores differentiated very well between children who have SMI and TD children when intelligibility was at or below approximately 75% and above approximately 85%.
Conclusions
Intelligibility seems to be a useful clinical tool for differentiating between TD children and children with SMI at 5 years of age; however, there is considerable variability within and between listeners, highlighting the need for more than one listener per child to ensure validity of an intelligibility measure.
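Intelligibility scores of this kind are typically computed as the percentage of target words recovered in a listener's orthographic transcription. The sketch below is a deliberately simplified word-matching version; real scoring schemes handle homophones, morphological variants, and word order more carefully, and nothing here reflects the study's actual scoring rules.

def intelligibility_score(target: str, transcription: str) -> float:
    """Percentage of target words matched in the listener's orthographic transcription
    (a simplified word-level scoring scheme)."""
    target_words = target.lower().split()
    heard = transcription.lower().split()
    hits = 0
    for word in target_words:
        if word in heard:
            heard.remove(word)          # each transcribed word can match only once
            hits += 1
    return 100.0 * hits / len(target_words)

# Hypothetical 4-word utterance and one listener's transcription
print(intelligibility_score("the big dog ran", "a big dog ran"))   # 75.0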

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UdaJ52
via IFTTT

Developing Brain Injury Interventions on Both Ends of the Treatment Continuum Depends Upon Early Research Partnerships and Feasibility Studies

Purpose
The purpose of this research article is to describe two very different lines of brain injury treatment research, both of which illuminate the benefits of implementation science.
Method
The article first describes the development and pilot of a computerized cognitive intervention and highlights how adherence to implementation science principles improved the design of the intervention. Second, the article describes the application of implementation science to the development of assistive technology for cognition.
Results
The Consolidated Framework for Implementation Research (CFIR; Damschroder et al., 2009) and the menu of implementation research strategies by Powell et al. (2012) provide a roadmap for cognitive rehabilitation researchers to attend to factors in the implementation climate that can improve the development, usability, and adoptability of new treatment methods.
Conclusion
Attention to implementation science research principles has increased the feasibility and efficacy of both impairment-based cognitive rehabilitation programs and assistive technology for cognition.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1jYShjF
via IFTTT

The Influence of Phonotactic Probability on Nonword Repetition and Fast Mapping in 3-Year-Olds With a History of Expressive Language Delay

Purpose
The purpose of this study was to examine the influence of phonotactic probability on sublexical (phonological) and lexical representations in 3-year-olds who had a history of being late talkers in comparison with their peers with typical language development.
Method
Ten 3-year-olds who were late talkers and 10 age-matched typically developing controls completed nonword repetition and fast mapping tasks; stimuli for both experimental procedures differed in phonotactic probability.
Results
Both participant groups repeated nonwords containing high phonotactic probability sequences more accurately than nonwords containing low phonotactic probability sequences. Participants with typical language showed an early advantage for fast mapping high phonotactic probability words; children who were late talkers required more exposures to the novel words to show the same advantage for fast mapping high phonotactic probability words.
Conclusions
Children who were late talkers showed similar sensitivities to phonotactic probability in nonword repetition and word learning when compared with their peers with no history of language delay. However, word learning in children who were late talkers appeared to be slower when compared with their peers.
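Phonotactic probability is usually estimated from a lexicon as the summed positional frequency of a word's segments (fuller implementations also sum biphone frequencies). The sketch below illustrates the positional-segment part of that calculation on a toy, letter-based lexicon; real stimuli are built from phoneme transcriptions and much larger corpora, so the lexicon, function names, and values here are purely illustrative.

from collections import defaultdict
from typing import Dict, List, Tuple

def positional_probabilities(lexicon: List[str]) -> Dict[Tuple[int, str], float]:
    """Estimate positional segment probabilities from a lexicon: how often each
    segment occurs in each word position, relative to all segments counted at
    that position (a simplified Vitevitch & Luce-style count)."""
    counts: Dict[Tuple[int, str], int] = defaultdict(int)
    denom: Dict[int, int] = defaultdict(int)
    for word in lexicon:
        for pos, seg in enumerate(word):
            counts[(pos, seg)] += 1
            denom[pos] += 1
    return {key: n / denom[key[0]] for key, n in counts.items()}

def phonotactic_probability(word: str, probs: Dict[Tuple[int, str], float]) -> float:
    """Sum of positional segment probabilities for a (non)word."""
    return sum(probs.get((pos, seg), 0.0) for pos, seg in enumerate(word))

# Toy lexicon with one segment per character (real work uses phoneme transcriptions)
lexicon = ["kat", "kab", "tap", "pat", "map"]
probs = positional_probabilities(lexicon)
print(phonotactic_probability("kap", probs))   # higher-probability sequence
print(phonotactic_probability("fub", probs))   # lower-probability sequence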

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1T8DxL7
via IFTTT

Comparison of Psychophysiological and Dual-Task Measures of Listening Effort

Purpose
We compared psychophysiological measures of listening effort with subjective and dual-task measures of listening effort for a diotic-dichotic-digits task and a sentences-in-noise task.
Method
Three groups of young adults (18–38 years old) with normal hearing participated in three experiments: two psychophysiological studies for two different listening tasks and a dual-task measure for a sentences-in-noise task. Psychophysiological variables included skin conductance, heart-rate variability, and heart rate; the dual-task measure was a letter-identification task. Heart-rate variability was quantified as the difference from baseline of the normalized standard deviation of R–R intervals.
Results
Heart-rate variability differences from baseline were greater for increased task complexity and for poorer signal-to-noise ratios (SNRs). The dual-task measure of listening effort also increased for sentences presented at a +5 dB SNR compared with a +15 dB SNR. Skin conductance was elevated for greater task complexity only, and similar across noise conditions. None of these measures were significantly correlated with subjective measures of listening effort.
Conclusions
Heart-rate variability appears to be a robust psychophysiological indicator of listening effort, sensitive to both task complexity and SNR. This sensitivity to SNR was similar to a dual-task measure of listening effort.
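As a rough guide to the heart-rate-variability index described above, the sketch below computes the standard deviation of R-R intervals (SDNN) and expresses a task condition as a change relative to baseline. Treating "normalized" as division by the baseline value is an assumption on my part, and the R-R series and function names are invented for illustration.

import numpy as np

def sdnn_ms(rr_intervals_ms):
    """Standard deviation of R-R intervals (SDNN), a common time-domain
    heart-rate-variability index, in milliseconds."""
    return float(np.std(np.asarray(rr_intervals_ms, float), ddof=1))

def hrv_change_from_baseline(baseline_rr_ms, task_rr_ms):
    """Relative change in SDNN between a listening task and rest; dividing by the
    baseline value is one plausible reading of the 'normalized' measure described above."""
    base = sdnn_ms(baseline_rr_ms)
    return (sdnn_ms(task_rr_ms) - base) / base

# Hypothetical R-R series (ms): variability drops under the harder listening condition
baseline = [812, 845, 790, 830, 805, 850, 795]
hard_snr = [820, 828, 815, 824, 818, 826, 821]
print(hrv_change_from_baseline(baseline, hard_snr))   # negative = reduced HRV under load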

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QcUlkZ
via IFTTT
