Sunday, 1 July 2018

Sensitivity to Melody, Rhythm, and Beat in Supporting Speech-in-Noise Perception in Young Adults

Objectives: Musicians appear to have an enhanced ability to perceive speech in noise, prompting suggestions that musical training could be used to help people who struggle to communicate in noisy environments. This study assessed the role of sensitivity to beat, rhythm, and melody in supporting speech-in-noise perception.
Design: This is an exploratory study based on correlation. The study included 24 normally hearing young adult participants with a wide range of musical training and experience. Formal and informal musical experience was measured with the training subscale of the Goldsmiths’ Musical Sophistication Index. Speech reception thresholds (SRTs) were measured using the Matrix Sentence Test and three different speech-spectrum-shaped noise maskers: unmodulated and sinusoidally amplitude-modulated (modulation frequency fm = 8 Hz; modulation depths 60 and 80%). Primary predictors were measures of sensitivity to beat, rhythm, and melody. Secondary predictors were pure-tone frequency discrimination and auditory working memory (digit span); any contributions from these two predictors were to be controlled for as appropriate.
Results: Participants with more musical experience and greater sensitivity to rhythm, beat, and melody had better SRTs. Sensitivity to beat was more strongly linked with SRT than sensitivity to either rhythm or melody. This relationship remained strong even after factoring out contributions from frequency discrimination and auditory working memory.
Conclusions: Sensitivity to beat predicted SRTs in unmodulated and modulated noise. We propose that this sensitivity maximizes benefit from fluctuations in signal-to-noise ratio through temporal orienting of attention to perceptually salient parts of the signal. Beat perception may be a good candidate for targeted training aimed at enhancing speech perception when listening in noise.
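The sinusoidally amplitude-modulated maskers described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' stimulus code: the function name and parameters are assumptions, and the speech-spectrum shaping used in the study is omitted, so the carrier here is plain white noise.

```python
import numpy as np

def sam_noise(duration_s, fs, fm=8.0, depth=0.8, seed=0):
    """Sinusoidally amplitude-modulated noise masker (sketch).

    depth is the modulation depth (0.6 or 0.8 for the 60% and 80%
    conditions in the study); fm = 8 Hz is the modulation frequency.
    """
    rng = np.random.default_rng(seed)
    n = int(round(duration_s * fs))
    t = np.arange(n) / fs
    carrier = rng.standard_normal(n)          # white-noise carrier (unshaped)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * fm * t)
    return envelope * carrier
```

Setting depth = 0 yields the unmodulated masker condition.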
This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Acknowledgments: The research was funded with Medical Research Council intramural funding grant U135097130. J.G.B. was funded through the Nottingham University Hospitals National Health Service Trust Flexibility and Sustainability Fund. The authors have no conflicts of interest to disclose. Address for correspondence: Johanna G. Barry, Nottingham University Hospital Trust, Queen’s Medical Centre, Nottingham NG7 2UH, United Kingdom. E-mail: johannagbarry2@gmail.com Received August 29, 2017; accepted April 30, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2KnRCuq
via IFTTT

Effect of force magnitude of touch on the components of postural sway

Publication date: September 2018
Source: Gait & Posture, Volume 65
Author(s): Janina Manzieri Prado-Rico, Sandra Regina Alouche, Ariani Cardoso Sodré, Rafaela Barroso de Souza Costa Garbus, Sandra Maria Sbeghen Ferreira de Freitas
Background: Lightly touching the tip of the index finger on an external surface reduces postural sway during upright standing because the touch provides additional somatosensory information to the postural control system; applying more force additionally provides mechanical support. However, because most studies have investigated only two levels of force, it was unknown whether the control mechanisms of postural sway are affected by different force levels.
Research question: To examine the influence of the magnitude of force (up to 1, 2, 4, 6, or 8 N) applied to a touch bar on the mechanisms used to control postural sway during quiet standing with eyes open or closed.
Methods: Ten young right-handed adults stood for 35 s on a force platform, feet apart, while touching a rigid bar at the different force levels with eyes open or closed. The amplitude and velocity of the center of pressure and of its components, the Rambling and Trembling trajectories (related to supraspinal and spinal control mechanisms, respectively), were assessed.
Results: Touch reduced all trajectories, mainly the Rambling component and mainly with eyes closed. There was a floor effect of touch force: amplitudes and velocities were minimal at 4 N.
Significance: The component of postural sway under supraspinal neural control is more affected by the different force magnitudes applied to the touch bar.
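The Rambling–Trembling decomposition named in the abstract (Zatsiorsky and Duarte's method) estimates the rambling trajectory by interpolating the center of pressure (COP) at instant equilibrium points, i.e. instants where the horizontal ground reaction force crosses zero; trembling is the residual. A minimal sketch of that idea, assuming the function name and using linear interpolation rather than the cubic spline of the original method:

```python
import numpy as np

def rambling_trembling(cop, fhor, fs):
    """Decompose a COP trajectory into Rambling and Trembling components.

    cop  : center-of-pressure time series (m)
    fhor : horizontal ground reaction force, same length as cop (N)
    fs   : sampling rate (Hz)
    """
    t = np.arange(len(cop)) / fs
    # Instant equilibrium points: sign changes of the horizontal force.
    s = np.signbit(fhor)
    iep = np.where(s[:-1] != s[1:])[0]
    # Rambling: COP interpolated through the instant equilibrium points.
    rambling = np.interp(t, t[iep], cop[iep])
    # Trembling: deviation of the COP from the rambling trajectory.
    trembling = cop - rambling
    return rambling, trembling
```

By construction, rambling + trembling reproduces the original COP signal exactly.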

from #Audiology via ola Kala on Inoreader https://ift.tt/2Kwcajs
via IFTTT

Working towards an objective segmental assessment of trunk control in children with cerebral palsy

Publication date: Available online 30 June 2018
Source: Gait & Posture
Author(s): María B. Sánchez, Ian Loram, Paul Holmes, John Darby, Penelope B. Butler
Background: Physical therapy evaluations of motor control are currently based on subjective clinical assessments. Despite validation, these can be inconsistent between therapists and between clinics, compromising the validation of therapeutic interventions and the subsequent generation of evidence-based practice (EBP) guidelines. EBP benefits from well-defined objective measurements that complement existing subjective assessments.
Research question: The aim of this study was to develop an objective measure of head/trunk control in children with cerebral palsy (CP), using previously developed video-based measures of head/trunk alignment and absence of external support, and to compare it with the existing subjective Segmental Assessment of Trunk Control (SATCo).
Methods: Twelve children with CP were recruited, and an average of 3 (±1.1) SATCo tests were performed per child. The full SATCo was concurrently video-recorded from a sagittal view; markers were placed on specific landmarks of the head, trunk, and pelvis to track and estimate head/trunk segment position. A simplified objective rule for control was created and applied to videos showing no external support. This replicated the clinical parameters and enabled identification of segmental loss of control. The subjectively and objectively identified segmental losses of control were compared using a Pearson correlation coefficient.
Results: An angular threshold of 17° from alignment showed the minimum bias between the subjectively and objectively measured segmental loss of control (mean error = −0.11, RMSE = 1.5) and a significant correlation (r = 0.78, r² = 0.61, p < .01).
Significance: This study showed that simple objective video-based measurements can be used to reconstruct the subjective assessment of segmental head/trunk control, suggesting that a clinically friendly video-based objective measure has the potential to complement subjective assessments and to assist in the generation of EBP guidelines. Further development will increase the information that can be extracted from video images and enable a fully automated objective measure.
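The objective rule described here amounts to measuring a segment's angle from vertical alignment in the sagittal plane and flagging loss of control when it exceeds a threshold (17° was optimal in this study). A hedged sketch of that classification step; the function names and 2-D marker coordinates are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def segment_angle_deg(lower_xy, upper_xy):
    """Angle of a body segment from vertical, in degrees,
    given 2-D sagittal-view marker positions (x, y)."""
    dx, dy = np.subtract(upper_xy, lower_xy)
    # arctan2(dx, dy) is zero when the segment is perfectly vertical.
    return abs(np.degrees(np.arctan2(dx, dy)))

def loss_of_control(lower_xy, upper_xy, threshold_deg=17.0):
    """Flag segmental loss of control when the segment deviates
    from vertical alignment by more than the angular threshold."""
    return segment_angle_deg(lower_xy, upper_xy) > threshold_deg
```

For example, a perfectly vertical head/trunk segment gives 0° (control retained), while a 45° lean exceeds the 17° threshold.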

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ky3Psl
via IFTTT

Impedances of the inner and middle ear estimated from intracochlear sound pressures in normal human temporal bones

Publication date: Available online 30 June 2018
Source: Hearing Research
Author(s): Darcy L. Frear, Xiying Guan, Christof Stieger, John J. Rosowski, Hideko Heidi Nakajima
For almost a decade, we have measured intracochlear sound pressures evoked by air-conducted (AC) sound presented to the ear canal in many fresh human cadaveric specimens. Similar measurements were also obtained during round window (RW) mechanical stimulation in multiple specimens. In the present study, we use our accumulated data of intracochlear pressures and simultaneous velocity measurements of the stapes or RW to determine the acoustic impedances of the cochlear partition, the RW, and the leakage paths from scala vestibuli and scala tympani, as well as the reverse middle-ear impedance. With these impedances, we develop a computational lumped-element model of the normal ear that illuminates fundamental mechanisms of sound transmission.
To calculate the impedances for our model, we use data that pass strict inclusion criteria: (a) a normal middle-ear transfer function, defined as the ratio of stapes velocity to ear-canal sound pressure; (b) no evidence of air within the inner ear; and (c) tight control of pressure-sensor sensitivity. After this strict screening, updated normal means, as well as individual representative data, of ossicular velocities and intracochlear pressures for AC and RW stimulation are used to calculate the impedances. This work demonstrates the existence and value of physiological acoustic leak impedances, which can contribute significantly to sound transmission for some stimulation modalities. The model allows understanding of human sound-transmission mechanisms for various stimulation methods such as AC, RW, and bone conduction, as well as sound transmission related to otoacoustic emissions.
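The impedances in such a lumped-element model follow from the standard definition of acoustic impedance, Z = P / U, where P is the measured sound pressure and U = v · A is the volume velocity (structure velocity times effective area). A minimal sketch of that calculation; the function name and arguments are illustrative assumptions, not the authors' code:

```python
def acoustic_impedance(p, v, area):
    """Acoustic impedance Z = P / U, with volume velocity U = v * area.

    p    : sound pressure (Pa); may be complex to carry phase
    v    : structure velocity, e.g. of the stapes or RW (m/s); may be complex
    area : effective vibrating area (m^2)
    Returns Z in acoustic ohms (Pa * s / m^3).
    """
    return p / (v * area)
```

With complex p and v (magnitude and phase at each stimulus frequency), the same expression yields the complex impedances used in such a model.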

from #Audiology via ola Kala on Inoreader https://ift.tt/2yVnUaJ
via IFTTT
