Wednesday, 13 April 2016

The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

Publication date: Available online 13 April 2016
Source: Hearing Research
Author(s): Paula C. Stacey, Pádraig T. Kitterick, Saffron D. Morris, Christian J. Sumner
Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker’s voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues.
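The sine-wave vocoding manipulation described above (discarding informative temporal fine structure while preserving each band's amplitude envelope) can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' exact processing chain; the band edges, filter order, and envelope-extraction method are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def sine_vocode(x, fs, edges):
    """Sine-wave vocode a signal: split into bands, extract each band's
    amplitude envelope, and re-impose it on a sine carrier at the band
    centre, removing the original temporal fine structure."""
    out = np.zeros(len(x))
    t = np.arange(len(x)) / fs
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))        # Hilbert amplitude envelope
        fc = np.sqrt(lo * hi)              # geometric-mean carrier frequency
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```

Adding more bands improves intelligibility of the envelope-only speech, which is why vocoded-speech studies vary the channel count as well as the presence of visual cues.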



from #Audiology via ola Kala on Inoreader http://ift.tt/1YucwEr
via IFTTT

Tonal frequency affects amplitude but not topography of rhesus monkey cranial EEG components

Publication date: Available online 13 April 2016
Source: Hearing Research
Author(s): Tobias Teichert
The rhesus monkey is an important model of human auditory function in general, and of auditory deficits in neuro-psychiatric diseases such as schizophrenia in particular. Several rhesus monkey studies have described homologs of clinically relevant auditory evoked potentials such as pitch-based mismatch negativity, a fronto-central negativity that can be observed when a series of regularly repeating sounds is disrupted by a sound of different tonal frequency. As a result, it is well known how differences in tonal frequency are represented in rhesus monkey EEG. However, to date no study has systematically quantified how absolute tonal frequency itself is represented. In particular, it is not known whether frequency affects rhesus monkey EEG component amplitude and topography in the same way as previously shown for humans. A better understanding of the effect of frequency may strengthen inter-species homology and will provide a more solid foundation on which to build the interpretation of frequency MMN in the rhesus monkey. Using arrays of up to 32 cranial EEG electrodes in 4 rhesus macaques, we identified 8 distinct auditory evoked components, including the N85, a fronto-central negativity that is the presumed homolog of the human N1. In line with human data, the amplitudes of most components, including the N85, peaked around 1000 Hz and were strongly attenuated above ∼1750 Hz. Component topography, however, remained largely unaffected by frequency. This latter finding may be consistent with the known absence in the rhesus monkey of certain anatomical structures believed to cause the changes in topography in humans by inducing a rotation of generator orientation as a function of tonal frequency. Overall, the findings are consistent with a homologous representation of tonal frequency in human and rhesus monkey EEG.
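Quantifying a component's amplitude at each tone frequency typically reduces to averaging the trials, baseline-correcting against the pre-stimulus interval, and taking the mean voltage in the component's latency window. A minimal Python sketch under assumed array shapes; the function name and windows are illustrative, not taken from the study.

```python
import numpy as np

def component_amplitude(epochs, fs, pre_s, win_s):
    """Mean evoked amplitude of one component at one electrode.
    epochs: (n_trials, n_samples) EEG segments including pre_s seconds
    of pre-stimulus baseline; win_s: (start, stop) of the component's
    latency window in seconds relative to stimulus onset."""
    erp = epochs.mean(axis=0)                  # trial-average waveform
    n_pre = int(pre_s * fs)
    erp = erp - erp[:n_pre].mean()             # baseline correction
    i0 = n_pre + int(win_s[0] * fs)
    i1 = n_pre + int(win_s[1] * fs)
    return erp[i0:i1].mean()
```

Repeating this per tone frequency yields the amplitude-versus-frequency tuning curves that the study reports; topography comes from computing the same value at every electrode.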



from #Audiology via ola Kala on Inoreader http://ift.tt/1VWkS8v
via IFTTT

Accuracy of KinectOne to quantify kinematics of the upper body

Publication date: Available online 13 April 2016
Source: Gait & Posture
Author(s): Roman P. Kuster, Bernd Heinlein, Christoph M. Bauer, Eveline S. Graf
Motion analysis systems deliver quantitative information, e.g. on the progress of rehabilitation programs aimed at improving range of motion. Markerless systems are of interest for clinical application because they are low-cost and easy to use. The first generation of the Kinect™ sensor showed promising results in validity assessments compared to an established marker-based system. However, no literature is available on the validity of the new ‘Kinect™ for Xbox One’ (KinectOne) in tracking upper body motion. Consequently, this study was conducted to analyze the accuracy and reliability of the KinectOne in tracking upper body motion. Twenty subjects performed shoulder abduction in the frontal and scapular planes, flexion, external rotation, and horizontal flexion in two conditions (sitting and standing). Arm and trunk motion were analyzed using the KinectOne and compared to a marker-based system. Comparisons were made using Bland-Altman statistics and the Coefficient of Multiple Correlation. On average, differences between systems of 3.9±4.0° and 0.1±3.8° were found for arm and trunk motion, respectively. Correlation was higher for arm than for trunk motion. Based on the observed bias, the accuracy of the KinectOne was found to be adequate to measure arm motion in a clinical setting. Although trunk motion showed a very low absolute bias between the two systems, the KinectOne was not able to track small changes over time. Before the KinectOne can find clinical application, further research is required to analyze whether validity can be improved with a customized tracking algorithm or different sensor placement, and to assess test-retest reliability.
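The Bland-Altman comparison used here summarizes agreement between two measurement systems as the mean difference (bias), its standard deviation, and the 95% limits of agreement — the numbers behind results like 3.9±4.0°. A minimal Python sketch; the function name is ours and the paper's exact computation may differ.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two systems measuring
    the same quantity: bias, SD of differences, 95% limits of agreement."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa
```

A small bias with wide limits of agreement — roughly the trunk-motion pattern reported above — means the systems agree on average but not on individual measurements.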



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1SMUvNp
via IFTTT

The effects of altering attentional demands of gait control on the variability of temporal and kinematic parameters.

Publication date: Available online 13 April 2016
Source: Gait & Posture
Author(s): Kenji Tanimoto, Masaya Anan, Tomonori Sawada, Makoto Takahashi, Koichi Shinkoda
The purpose of this study was to investigate the effects of cognitive and visuomotor tasks on gait control in terms of the magnitude and temporal structure of the variability in stride time and lower-limb kinematics measured using inertial sensors. Fourteen healthy young subjects walked on a treadmill for 15 min at a self-selected gait speed in three conditions: normal walking without a concurrent task; walking while performing a cognitive task; and walking while performing a visuomotor task. Time series of stride time and peak shank angular velocity were generated from acceleration and angular velocity data recorded from both shanks. The mean, coefficient of variation, and fractal scaling exponent α of these time series, and the standard deviation of shank angular velocity over the entire stride cycle, were calculated. The cognitive task affected long-range correlations in stride time but not lower-limb kinematics: the temporal structure of stride-time variability became more random. The visuomotor task affected lower-limb kinematics: subjects controlled their swing limb with greater variability and showed a more complex, adaptive lower-limb movement pattern. The effects of the dual tasks on gait control thus differed between stride time and lower-limb kinematics. These findings suggest that the temporal structure of variability and lower-limb kinematics are useful parameters for detecting changes in gait pattern and provide further insight into gait control.
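The two variability measures named above are the coefficient of variation (magnitude of variability) and the fractal scaling exponent α from detrended fluctuation analysis (temporal structure: α ≈ 0.5 indicates an uncorrelated, random series; α ≈ 1.0 indicates long-range correlations). An illustrative Python sketch follows; the scale selection and linear detrending are common defaults, not necessarily the authors' exact settings.

```python
import numpy as np

def cv(x):
    """Coefficient of variation (%) of a time series."""
    x = np.asarray(x, float)
    return 100.0 * x.std(ddof=1) / x.mean()

def dfa_alpha(x, scales=None):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(
            np.log10(4), np.log10(len(x) // 4), 12).astype(int))
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)              # non-overlapping windows
        t = np.arange(n)
        ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
              for s in segs]                         # detrend each window
        F.append(np.sqrt(np.mean(ms)))               # fluctuation at scale n
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha
```

In this framework, the reported shift toward "more random" stride times under the cognitive task corresponds to α moving down toward 0.5.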



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1VVT7Nw
via IFTTT

Video Feedback in Key Word Signing Training for Preservice Direct Support Staff

Purpose
Research has demonstrated that formal training is essential for professionals to learn key word signing. Yet, the particular didactic strategies have not been studied. Therefore, this study compared the effectiveness of verbal and video feedback in a key word signing training for future direct support staff.
Method
Forty-nine future direct support staff were randomly assigned to 1 of 3 key word signing training programs: modeling and verbal feedback (classical method [CM]), additional video feedback (+ViF), and additional video feedback and photo reminder (+ViF/R). Signing accuracy and training acceptability were measured 1 week after and 7 months after training.
Results
Participants from the +ViF/R program achieved significantly higher signing accuracy compared with the CM group. Acceptability ratings did not differ between any of the groups.
Conclusion
Results suggest that at an equal time investment, the programs containing more training components were more effective. Research on the effect of rehearsal on signing maintenance is warranted.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1T4to32
via IFTTT

A Systematic Review to Define the Speech and Language Benefit of Early (&lt;12 Months) Pediatric Cochlear Implantation

Objective: This review aimed to evaluate the additional benefit of pediatric cochlear implantation before 12 months of age in terms of speech and language development and auditory performance. Materials and Methods: We conducted a search in the PubMed, EMBASE, and CINAHL databases and included studies comparing groups with different ages at implantation that assessed speech perception and speech production, receptive language, and/or auditory performance. We included studies with a high directness of evidence (DoE). Results: We retrieved 3,360 articles. Ten studies with a high DoE were included; four articles with a medium DoE were discussed in addition. Six studies compared infants implanted before 12 months with children implanted between 12 and 24 months. Follow-up ranged from 6 months to 9 years. Cochlear implantation before the age of 2 years is beneficial according to one speech perception score (Phonetically Balanced Kindergarten combined with Consonant-Nucleus-Consonant) but not according to Glendonald Auditory Screening Procedure scores. Implantation before 12 months resulted in better speech production (Diagnostic Evaluation of Articulation and Phonology and the Infant-Toddler Meaningful Auditory Integration Scale), auditory performance (Categories of Auditory Performance-II score), and receptive language scores (2 out of 5: the Preschool Language Scale combined with Oral and Written Language Skills, and the Peabody Picture Vocabulary Test). Conclusions: The current best evidence lacks level 1 studies and consists mainly of cohort studies with a moderate to high risk of bias. Included studies showed consistent evidence that cochlear implantation should be performed early in life, but evidence is inconsistent across speech and language outcome measures regarding the additional benefit of implantation before the age of 12 months. Long-term follow-up studies are necessary to provide insight into the additional benefits of early pediatric cochlear implantation.
Audiol Neurotol 2016;21:113-126

from #Audiology via xlomafota13 on Inoreader http://ift.tt/260nAyK
via IFTTT

Application of the transtheoretical model of behaviour change for identifying older clients’ readiness for hearing rehabilitation during history-taking in audiology appointments

10.3109/14992027.2015.1136080
Katie Ekberg

from #Audiology via ola Kala on Inoreader http://ift.tt/1Mui3tN
via IFTTT

Measurements of high-frequency acoustic scattering from glacially eroded rock outcrops

Measurements of acoustic backscattering from glacially eroded rock outcrops were made off the coast of Sandefjord, Norway, using a high-frequency synthetic aperture sonar (SAS) system. A method by which scattering strength can be estimated from data collected by a SAS system is detailed, as well as a method to estimate an effective calibration parameter for the system. Scattering strength measurements from very smooth areas of the rock outcrops agree with predictions from both the small-slope approximation and perturbation theory, and range between −33 and −26 dB at a 20° grazing angle. Scattering strength measurements from very rough areas of the rock outcrops agree with the sine-squared shape of the empirical Lambertian model and fall between −30 and −20 dB at a 20° grazing angle. Both perturbation theory and the small-slope approximation are expected to be inaccurate for the very rough areas, and they overestimate scattering strength by 8 dB or more for all measurements of very rough surfaces. Supporting characterization of the environment was performed in the form of geoacoustic and roughness parameter estimates.
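The empirical Lambertian model mentioned above has the sine-squared form SS(θ) = μ + 10·log₁₀(sin²θ), where θ is the grazing angle and μ is a fitted Lambert parameter. A small sketch with an assumed μ = −17 dB, chosen only so that the value at 20° grazing lands inside the reported −30 to −20 dB range:

```python
import math

def lambertian_scattering_strength(theta_deg, mu_db=-17.0):
    """Empirical Lambertian model: SS(theta) = mu + 10*log10(sin^2(theta)).

    theta_deg is the grazing angle in degrees; mu_db is the Lambert
    parameter in dB (an assumed value here, not fitted to the study's data).
    """
    theta = math.radians(theta_deg)
    return mu_db + 10.0 * math.log10(math.sin(theta) ** 2)

for angle in (10, 20, 45, 90):
    print(f"{angle:3d} deg grazing: "
          f"SS = {lambertian_scattering_strength(angle):6.1f} dB")
```

The sine-squared term vanishes at normal incidence (90° grazing), so SS there equals μ directly; toward grazing incidence the scattering strength falls off smoothly, which is the shape the rough-outcrop measurements follow.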



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1qoUic8
via IFTTT

Audibility of dispersion error in room acoustic finite-difference time-domain simulation as a function of simulation distance

Finite-difference time-domain (FDTD) simulation has been a popular area of research in room acoustics due to its capability to simulate wave phenomena over a wide bandwidth directly in the time domain. A downside of the method is that it introduces a direction- and frequency-dependent error into the simulated sound field due to the non-linear dispersion relation of the discrete system. In this study, the perceptual threshold of the dispersion error is measured in three-dimensional FDTD schemes as a function of simulation distance. Dispersion error is evaluated for three different explicit, non-staggered FDTD schemes using the numerical wavenumber in the direction of the worst-case error of each scheme. It is found that the thresholds for the different schemes do not vary significantly when the phase velocity error level is fixed, but they do vary significantly between the different sound samples. The measured threshold for the audibility of dispersion error, at the probability level of 82% correct discrimination in a three-alternative forced-choice task, is 9.1 m of propagation in a free field, which leads to a maximum group delay error of 1.8 ms at 20 kHz with the chosen phase velocity error level of 2%.
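To make the direction- and frequency-dependent error concrete: for the standard rectilinear leapfrog scheme (one common explicit, non-staggered scheme; the study's three schemes are not reproduced here) run at the 3-D Courant stability limit, the worst-case direction is axial, and the relative numerical phase velocity follows from the discrete dispersion relation sin(ωT/2) = λ·sin(kh/2). The sound speed and sample rate below are assumed illustrative values:

```python
import numpy as np

def srl_relative_phase_velocity(f, c=343.0, fs=48000.0):
    """Relative numerical phase velocity (v_num / c) along an axial
    direction for the standard rectilinear leapfrog FDTD scheme at the
    3-D Courant limit. c and fs are illustrative assumed parameters."""
    T = 1.0 / fs
    lam = 1.0 / np.sqrt(3.0)        # Courant number at the 3-D stability limit
    h = c * T / lam                  # grid spacing implied by the Courant number
    k = 2.0 * np.pi * np.asarray(f, dtype=float) / c   # ideal wavenumber
    # Axial dispersion relation solved for the numerical angular frequency:
    omega_num = (2.0 / T) * np.arcsin(lam * np.sin(k * h / 2.0))
    return omega_num / (c * k)

for f in (1000.0, 4000.0, 8000.0):
    err = 100.0 * (1.0 - srl_relative_phase_velocity(f))
    print(f"{f:6.0f} Hz: phase velocity error {err:.2f} %")
```

The error vanishes at low frequency and grows toward the Nyquist limit, which is why a fixed phase-velocity error level (2% in the study) corresponds to a specific simulated bandwidth and propagation distance.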



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1qHWz2N
via IFTTT

On the variation of interaural time differences with frequency

Interaural time difference (ITD) is a major cue to sound localization in humans and animals. For a given subject and position in space, the ITD depends on frequency. This variation is analyzed here using a head-related transfer function (HRTF) database compiled from the literature, comprising human HRTFs from 130 subjects and animal HRTFs from six specimens of different species. For humans, the ITD is found to vary with frequency in a way that shows consistent differences with respect to a spherical head model. Maximal ITD values were found to be about 800 μs at low frequencies and 600 μs at high frequencies. The ITD variation with frequency (up to 200 μs for some positions) occurs within the frequency range where ITD is used to judge the lateral position of a sound source. In addition, ITD varies substantially within the bandwidth of a single auditory filter, leading to systematic differences between envelope and fine-structure ITDs. Because the frequency-dependent pattern of ITD does not display spherical symmetries, it potentially provides cues to elevation and resolves front/back confusion. The fact that the relation between position and ITD strongly depends on the sound's spectrum suggests, in turn, that humans and animals make use of this relationship for the localization of sounds.
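The roughly 800 μs low-frequency and 600 μs high-frequency maxima are close to what a spherical head model predicts. As a sketch: the low-frequency diffraction limit gives ITD ≈ (3a/c)·sin θ, while the high-frequency Woodworth formula gives ITD ≈ (a/c)(θ + sin θ), where θ is azimuth from the median plane; the head radius a = 8.75 cm is an assumed textbook value:

```python
import math

def itd_low_freq(azimuth_deg, a=0.0875, c=343.0):
    """Low-frequency (diffraction) spherical-head limit: (3a/c) * sin(theta)."""
    return 3.0 * a / c * math.sin(math.radians(azimuth_deg))

def itd_high_freq(azimuth_deg, a=0.0875, c=343.0):
    """High-frequency Woodworth spherical-head model: (a/c)(theta + sin(theta))."""
    th = math.radians(azimuth_deg)
    return a / c * (th + math.sin(th))

for az in (30, 60, 90):
    print(f"azimuth {az:2d} deg: "
          f"low-freq ITD = {itd_low_freq(az) * 1e6:5.0f} us, "
          f"high-freq ITD = {itd_high_freq(az) * 1e6:5.0f} us")
```

At 90° azimuth the two limits differ by roughly 110 μs for this head size, reproducing the ordering (larger ITDs at low frequencies) and the approximate magnitudes reported; the measured human data deviate from this spherical prediction in position-dependent ways, which is the paper's point.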



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1qoUfNy
via IFTTT

Effects of language experience on pre-categorical perception: Distinguishing general from specialized processes in speech perception

Cross-language differences in speech perception have traditionally been linked to phonological categories, but it has become increasingly clear that language experience has effects beginning at early stages of perception, which blurs the accepted distinction between general and speech-specific processing. The present experiments explored this distinction by presenting English and Japanese speakers with stimuli that manipulated the acoustic form of English /r/ and /l/, in order to determine how acoustically natural and phonologically identifiable a stimulus must be for cross-language discrimination differences to emerge. Discrimination differences were found for stimuli that did not sound subjectively like speech or like /r/ and /l/, but overall they were strongly linked to phonological categorization. The results thus support the view that phonological categories are an important source of cross-language differences, but also show that these differences can extend to stimuli that do not clearly sound like speech.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1qHWz2G
via IFTTT

Self-Fitting Hearing Aids: Status Quo and Future Predictions

A self-contained, self-fitting hearing aid (SFHA) is a device that enables the user to perform both the threshold measurements leading to a prescribed hearing aid setting and the subsequent fine-tuning, without the need for audiological support or access to other equipment. The SFHA has been proposed as a potential solution to unmet hearing health-care needs in developing countries and remote locations in the developed world, and is considered a means to lower cost and increase uptake of hearing aids in developed countries. This article reviews the status of the SFHA, examines the evidence for its feasibility and challenges, and predicts where it is heading. Devices that can be considered partly or fully self-fitting without audiological support were identified in the direct-to-consumer market. None of these devices is self-contained, as all require access to other hardware such as a proprietary interface, computer, smartphone, or tablet for manipulation. While there is evidence that self-administered fitting processes can provide valid and reliable results, their success relies on user-friendly device designs and interfaces and easy-to-interpret instructions. Until these issues have been sufficiently addressed, optional assistance with the self-fitting process and ongoing use of SFHAs is recommended. Affordability and a sustainable delivery system remain additional challenges for the SFHA in developing countries. Future predictions include a growth in self-fitting products, with most future SFHAs consisting of earpieces that connect wirelessly with a smartphone, providers offering assistance through a telehealth infrastructure, and the integration of SFHAs into the traditional hearing health-care model.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/20Apthy
via IFTTT
