Monday, June 13, 2016

Sensitivity to Interaural Time Differences Conveyed in the Stimulus Envelope: Estimating Inputs of Binaural Neurons Through the Temporal Analysis of Spike Trains

Abstract

Sound-source localization in the horizontal plane relies on detecting small differences in the timing and level of the sound at the two ears, including differences in the timing of the modulated envelopes of high-frequency sounds (envelope interaural time differences, ITDs). We investigated responses of single neurons in the inferior colliculus (IC) to a wide range of envelope ITDs and stimulus envelope shapes. By a novel means of visualizing neural activity relative to different portions of the periodic stimulus envelope at each ear, we demonstrate the role of neuron-specific excitatory and inhibitory inputs in creating ITD sensitivity (or the lack of it) depending on the specific shape of the stimulus envelope. The underlying binaural brain circuitry and synaptic parameters were modeled individually for each neuron to account for neuron-specific activity patterns. The model explains the effects of envelope shapes on sensitivity to envelope ITDs observed both in normal-hearing listeners and in neural data, and has consequences for understanding how ITD information in stimulus envelopes might be maximized in users of bilateral cochlear implants, for whom ITDs conveyed in the stimulus envelope are the only ITD cues available.
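As a rough illustration of the rate-vs-ITD curves discussed above, the following is a minimal sketch, not the authors' neuron-specific circuit model: it cross-correlates toy periodic envelopes at the two ears over a range of envelope ITDs, with the correlation standing in for the output of a binaural coincidence detector. The envelope shape, modulation rate, and duty cycle are illustrative assumptions.

```python
# Toy rate-vs-ITD curve from cross-correlating the two ears' stimulus envelopes.
# All stimulus parameters below are illustrative assumptions.
import numpy as np

fs = 100_000            # sample rate (Hz)
fm = 128.0              # envelope (modulation) frequency (Hz), assumed
dur = 0.5               # stimulus duration (s)
t = np.arange(int(fs * dur)) / fs

def envelope(t, duty=0.5):
    """Toy periodic envelope: a half-sine 'on' portion followed by a flat pause."""
    phase = (t * fm) % 1.0
    env = np.clip(np.sin(np.pi * phase / duty), 0.0, None)
    env[phase >= duty] = 0.0
    return env

itds_us = np.arange(-2000, 2001, 100)   # envelope ITDs to test (microseconds)
rate_like = []
for itd in itds_us:
    shift = int(round(itd * 1e-6 * fs))
    left = envelope(t)
    right = np.roll(envelope(t), shift)
    # product of the two envelopes as a stand-in for binaural coincidence counting
    rate_like.append(np.mean(left * right))

best_itd = itds_us[int(np.argmax(rate_like))]
print(f"peak of the toy rate-vs-ITD curve at {best_itd} us")
```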



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Olz3mS
via IFTTT

Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images

Purpose
Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time.
Method
The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm.
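For concreteness, here is a minimal sketch of the segmentation stage only, using the random-walker implementation in scikit-image; the harmonic-phase tag tracking, super-resolution reconstruction, and incompressible deformation estimation stages are not reproduced, and the placeholder volume and seed placement are assumptions rather than the article's data.

```python
# Random-walker segmentation of a (placeholder) super-resolution tongue volume.
import numpy as np
from skimage.segmentation import random_walker

# volume: a super-resolution cine MR volume, e.g. loaded from NIfTI elsewhere.
volume = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder data

# Seed labels: 1 = tongue, 2 = background; 0 = unlabeled voxels to classify.
labels = np.zeros(volume.shape, dtype=np.int32)
labels[30:34, 30:34, 30:34] = 1        # a few voxels assumed to lie inside the tongue
labels[:2, :, :] = 2                   # voxels assumed to lie clearly outside it

# Each unlabeled voxel receives the label that a random walker starting there
# is most likely to reach first, given image-intensity-weighted edges.
tongue_mask = random_walker(volume, labels, beta=130) == 1
print("tongue voxels:", int(tongue_mask.sum()))
```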
Results
The method is evaluated on a control group and on a group of glossectomy patients, each carrying out a speech task. A 2-step principal-components analysis is then used to reveal the subjects' distinctive motion patterns. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed.
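One plausible way to organize such a two-stage principal-components analysis is sketched below; the split into a within-subject stage followed by an across-subject stage is an assumption made for illustration, not the article's exact procedure.

```python
# Hedged sketch of a two-stage PCA over motion fields (shapes are placeholders).
import numpy as np
from sklearn.decomposition import PCA

# motion[s] holds one subject's displacement fields, flattened to
# (time_frames, n_voxels * 3).
rng = np.random.default_rng(0)
motion = [rng.standard_normal((20, 3000)) for _ in range(8)]     # 8 subjects

# Stage 1: summarize each subject by the scores of their own leading components.
stage1 = [PCA(n_components=5).fit_transform(m) for m in motion]

# Stage 2: PCA across subjects on per-subject summaries, exposing motion patterns
# that distinguish subjects (e.g. patients vs. controls).
features = np.vstack([s.mean(axis=0) for s in stage1])           # one row per subject
across = PCA(n_components=2).fit_transform(features)
print(across.shape)    # (8, 2): each subject projected onto two group-level axes
```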
Conclusion
Tests of the method on a varied group of subjects show that it can capture patient-specific motion patterns and indicate its potential value for future speech studies.

from #Audiology via ola Kala on Inoreader http://ift.tt/28zSMG4
via IFTTT


Measured depth-dependence of waveguide invariant in shallow water with a summer profile

Acoustic-intensity striation patterns were measured in the time-frequency domain using an L-shaped array and two simultaneously towed broadband (350–650 Hz) sources at depths above and below the thermocline under summer profile conditions. Distributions of the waveguide invariant parameter β, extracted from the acoustic striation patterns, peak at different values when receivers are above or below the thermocline for a source that is below the thermocline. However, the distributions show similar characteristics when the source is above the thermocline. Experimental results are verified by a numerical analysis of phase slowness, group slowness, and relative amplitudes of acoustic modes.
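For reference, the waveguide invariant relates the slope of an intensity striation in the range-frequency plane to β through df/dr ≈ βf/r, so a local estimate of β follows directly from a measured striation slope. The sketch below uses this range-frequency form with made-up numbers; in the experiment the striations are tracked against time as the sources are towed, with time standing in for range.

```python
# Local estimate of the waveguide invariant beta from a measured striation slope.
def waveguide_invariant(r_m, f_hz, df_dr):
    """Estimate beta from range r (m), frequency f (Hz), and the local striation
    slope df/dr (Hz per metre) read off the intensity pattern."""
    return (r_m / f_hz) * df_dr

# Example (illustrative values, not the paper's data): a striation passing through
# 500 Hz at 5 km range with a slope of 0.1 Hz/m.
print(waveguide_invariant(5_000.0, 500.0, 0.1))   # -> 1.0, the canonical value
```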



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1YlPLoe
via IFTTT

Effect of acoustic fine structure cues on the recognition of auditory-only and audiovisual speech

This study addressed the hypothesis that an improvement in speech recognition due to combined envelope and fine structure cues is greater in the audiovisual than the auditory modality. Normal hearing listeners were presented with envelope vocoded speech in combination with low-pass filtered speech. The benefit of adding acoustic low-frequency fine structure to acoustic envelope cues was significantly greater for audiovisual than for auditory-only speech. It is suggested that this is due to complementary information of the different acoustic and visual cues. The results have potential implications for the assessment of bimodal cochlear implant fittings or electroacoustic stimulation.
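A minimal sketch of the two acoustic signals being combined, an envelope vocoder that removes temporal fine structure plus low-pass filtered speech that retains low-frequency fine structure, is given below; band counts, cutoff frequencies, and the noise carrier are illustrative assumptions rather than the study's exact processing.

```python
# Envelope-vocoded speech plus low-pass filtered speech (illustrative parameters).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_vocoder(x, fs, n_bands=8, lo=100.0, hi=7000.0, env_cut=50.0):
    """Noise-excited envelope vocoder: band-split, extract band envelopes,
    re-impose them on band-limited noise carriers (fine structure discarded)."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1, f2], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(band_sos, x)
        env_sos = butter(4, env_cut, btype='lowpass', fs=fs, output='sos')
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))      # envelope cue only
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out

def lowpass_speech(x, fs, cutoff=500.0):
    """Low-pass filtered speech: keeps low-frequency fine structure."""
    sos = butter(6, cutoff, btype='lowpass', fs=fs, output='sos')
    return sosfiltfilt(sos, x)

fs = 16_000
speech = np.random.default_rng(1).standard_normal(fs)          # placeholder signal
combined = envelope_vocoder(speech, fs) + lowpass_speech(speech, fs)
```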



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1S2rvAf
via IFTTT
