Monday, 22 February 2016

The Effects of Acoustic Bandwidth on Simulated Bimodal Benefit in Children and Adults with Normal Hearing.

Objectives: The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. Design: Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. Results: The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. Conclusions: Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, these results suggest pediatric CI recipients may derive significant benefit from minimal acoustic hearing …
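The low-pass filtered acoustic stimuli described above are straightforward to reproduce in simulation. The short Python sketch below is a minimal illustration only: the abstract does not report the filter type or order, so the 4th-order Butterworth filter and zero-phase filtering are placeholder assumptions, and the sample rate and dummy signal are purely illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def low_pass_speech(signal, fs, cutoff_hz, order=4):
    """Low-pass filter a speech waveform to simulate limited residual
    acoustic hearing in the non-implanted ear (e.g. cutoffs of 250, 500,
    750, 1000, or 1500 Hz, as in the study conditions)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Example: generate the five acoustic-ear conditions from one sentence.
fs = 44100
sentence = np.random.randn(2 * fs)          # stand-in for a recorded sentence
conditions = {fc: low_pass_speech(sentence, fs, fc)
              for fc in (250, 500, 750, 1000, 1500)}
```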

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1WFxvD5
via IFTTT

A Randomized Control Trial: Supplementing Hearing Aid Use with Listening and Communication Enhancement (LACE) Auditory Training.

Objective: To examine the effectiveness of the Listening and Communication Enhancement (LACE) program as a supplement to standard-of-care hearing aid intervention in a Veteran population. Design: A multisite randomized controlled trial was conducted to compare outcomes following standard-of-care hearing aid intervention supplemented with (1) LACE training using the 10-session DVD format, (2) LACE training using the 20-session computer-based format, (3) placebo auditory training (AT) consisting of actively listening to 10 hr of digitized books on a computer, and (4) educational counseling (the control group). The study involved 3 VA sites and enrolled 279 veterans. Both new and experienced hearing aid users participated to determine if outcomes differed as a function of hearing aid user status. Data for five behavioral and two self-report measures were collected during three research visits: baseline, immediately following the intervention period, and at 6 months postintervention. The five behavioral measures were selected to determine whether the perceptual and cognitive skills targeted in LACE training generalized to untrained tasks that required similar underlying skills. The two self-report measures were completed to determine whether the training resulted in a lessening of activity limitations and participation restrictions. Outcomes were obtained from 263 participants immediately following the intervention period and from 243 participants 6 months postintervention. Analyses of covariance comparing performance on each outcome measure separately were conducted using intervention and hearing aid user status as between-subject factors, visit as a within-subject factor, and baseline performance as a covariate. Results: No statistically significant main effects or interactions were found for the use of LACE on any outcome measure. Conclusions: Findings from this randomized controlled trial show that LACE training does not result in improved outcomes over standard-of-care hearing aid intervention alone. Potential benefits of AT may be different from those assessed by the performance and self-report measures utilized here. Individual differences not assessed in this study should be examined to evaluate whether AT with LACE has any benefits for particular individuals. Clinically, these findings suggest that audiologists may want to temper the expectations of their patients who embark on LACE training. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Qzd6Bk
via IFTTT

White Noise For Tinnitus


If you have suffered from tinnitus, you know all too well how irritating it can be. Most people describe tinnitus as a ringing in the ears, but it can also be heard as a roaring or rushing sound like the one you may hear at the ocean, as clicking or hissing, or even as indistinct voices. These sounds are perceived when there is no external source producing them.

Tinnitus can be difficult to diagnose. Many people suffer for years before seeking help, and even then a diagnosis may be elusive. A sufferer may make an appointment with a healthcare provider and have a physical exam, including an inspection of the ears, that finds nothing abnormal. That person may then be given medication that is not intended for tinnitus and, unsurprisingly, is not effective, leaving the sufferer with continued struggle and frustration. It is also very difficult to get others to understand what it is like to hear sounds that have no external source.

Oftentimes, sufferers are left feeling alone, left out, and misunderstood. Well-meaning healthcare professionals have on many occasions mistaken a person who suffers from tinnitus for someone who is depressed or anxious and prescribed antidepressants, which can actually worsen tinnitus. This kind of misdiagnosis and medication error is a real risk for those who live with debilitating tinnitus.

What Causes Tinnitus?
Tinnitus can be caused by a number of health conditions; however, in many instances a direct cause may not be found. Common causes include:
• Age-related hearing loss.
• Loud noises.
• Structural changes to the bone in the middle ear.
• Earwax buildup.
• Meniere’s disease.
• Temporomandibular joint disorder.
• Blood vessel problems.

White Noise For Tinnitus Treatment
White noise is an effective treatment for tinnitus. Although it helps lessen the troublesome symptoms, it is not a cure. White noise is best described as the complete spectrum of audible frequencies combined into a single continuous sound. Think of it as the sound of static, or the sound your television makes when a station is off the air.
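As a concrete illustration of "the complete spectrum of audible frequencies combined into a continuous sound", the sketch below generates broadband Gaussian white noise in Python. The duration, sample rate, and level are arbitrary illustration values, not clinical settings.

```python
import numpy as np

def white_noise(duration_s=10.0, fs=44100, rms=0.1, seed=None):
    """Gaussian white noise: equal average power at every frequency up to
    the Nyquist limit, heard as a steady, featureless 'hiss'."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(duration_s * fs))
    noise *= rms / np.sqrt(np.mean(noise ** 2))   # scale to the requested RMS
    return noise.astype(np.float32)

masker = white_noise(10.0)   # ten seconds of masking noise at 44.1 kHz
```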

White noise treatment is also referred to as sound therapy. This form of treatment masks the agitating sounds that are perceived when an attack occurs, thereby reducing how prominent the tinnitus seems. It can be delivered through devices such as MP3 players, CD players, or bedroom speakers, even placed under your pillow to help you sleep more soundly.

There are many online sites and software vendors that offer white noise therapy for tinnitus. However, it is highly recommended that you see a qualified audiologist. An audiologist is a healthcare professional who specializes in identifying, diagnosing, and treating disorders of the auditory and vestibular parts of the ear. Your audiologist will determine the proper frequency settings for your tinnitus and program your noise generator for your specific needs.

If you or someone close to you suffers from ringing in the ears or other auditory disturbances, white noise for tinnitus has been shown to be a very effective treatment.




from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Orcoyk
via IFTTT

Cortical Reorganisation during a 30-Week Tinnitus Treatment Program

by Catherine M. McMahon, Ronny K. Ibrahim, Ankit Mathur

Subjective tinnitus is characterised by the conscious perception of a phantom sound. Previous studies have shown that individuals with chronic tinnitus have disrupted sound-evoked cortical tonotopic maps, time-shifted evoked auditory responses, and altered oscillatory cortical activity. The main objectives of this study were to: (i) compare sound-evoked brain responses and cortical tonotopic maps in individuals with bilateral tinnitus and those without tinnitus; and (ii) investigate whether changes in these sound-evoked responses occur with amelioration of the tinnitus percept during a 30-week tinnitus treatment program. Magnetoencephalography (MEG) recordings of 12 bilateral tinnitus participants and 10 control normal-hearing subjects reporting no tinnitus were obtained at baseline, using 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz tones presented monaurally at 70 dB SPL through insert tube phones. For the tinnitus participants, MEG recordings were obtained at 5-, 10-, 20-, and 30-week time points during tinnitus treatment. Results for the 500 Hz and 1000 Hz sources (where hearing thresholds were within normal limits for all participants) showed that the tinnitus participants had significantly larger and more anteriorly located source strengths than the non-tinnitus participants. During the 30-week tinnitus treatment, the participants’ 500 Hz and 1000 Hz source strengths remained higher than those of the non-tinnitus participants; however, the source locations shifted towards the locations recorded from the non-tinnitus control group. Further, in the left hemisphere, there was a time-shifted association between the trajectory of change of the individual’s objective measures (source strength and anterior-posterior source location) and subjective measures (using the tinnitus reaction questionnaire, TRQ). The differences in source strength between the two groups suggest that individuals with tinnitus have enhanced central gain which is not significantly influenced by the tinnitus treatment, and may result from the hearing loss per se. On the other hand, the shifts in the tonotopic map towards the non-tinnitus participants’ source locations suggest that the tinnitus treatment might reduce the disruptions in the map, presumably produced by the tinnitus percept directly or indirectly. Further, the similarity in the trajectory of change across the objective and subjective parameters after time-shifting the perceptual changes by 5 weeks suggests that during or following treatment, perceptual changes in the tinnitus percept may precede neurophysiological changes. Subgroup analyses conducted by magnitude of hearing loss showed no differences in the 500 Hz and 1000 Hz source strength amplitudes between the mild-moderate and the mild-severe hearing loss subgroups, although the mean source strength was consistently higher for the mild-severe subgroup. Further, the mild-severe subgroup had 500 Hz and 1000 Hz source locations located more anteriorly (i.e., more disrupted relative to the control group) than the mild-moderate subgroup, although this was trending towards significance only for the 500 Hz left hemisphere source. While the small numbers of participants within the subgroup analyses reduce the statistical power, this study suggests that those with greater magnitudes of hearing loss show greater cortical disruptions with tinnitus and that tinnitus treatment appears to reduce the tonotopic map disruptions but not the source strength (or central gain).

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1UiCxa2
via IFTTT

Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech

Purpose
Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech.
Method
We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics).
Results
We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech.
Conclusions
Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/21apeOi
via IFTTT

Markers, Models, and Measurement Error: Exploring the Links Between Attention Deficits and Language Impairments

Purpose
The empirical record regarding the expected co-occurrence of attention-deficit/hyperactivity disorder (ADHD) and specific language impairment is confusing and contradictory. A research plan is presented that has the potential to untangle links between these 2 common neurodevelopmental disorders.
Method
Data from completed and ongoing research projects examining the relative value of different clinical markers for separating cases of specific language impairment from ADHD are presented.
Results
The best option for measuring core language impairments in a manner that does not potentially penalize individuals with ADHD is to focus assessment on key grammatical and verbal memory skills. Likewise, assessment of ADHD symptoms through standardized informant rating scales is optimized when they are adjusted for overlapping language and academic symptoms.
Conclusion
As a collection, these clinical metrics set the stage for further examination of potential linkages between attention deficits and language impairments.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XIO2qY
via IFTTT

Effects of Levodopa on Postural Strategies in Parkinson's disease

Publication date: Available online 22 February 2016
Source:Gait & Posture
Author(s): Chiara Baston, Martina Mancini, Laura Rocchi, Fay Horak
Altered postural control and balance are major disabling issues of Parkinson's disease (PD). Static and dynamic posturography have provided insight into PD's postural deficits; however, little is known about impairments in postural coordination. We hypothesized that subjects with PD would show more ankle strategy during quiet stance than healthy control subjects, who would include some hip strategy, and that this stiffer postural strategy would increase with disease progression. We quantified postural strategy and sway dispersion with inertial sensors (one placed on the shank and one on the posterior trunk at L5 level) while subjects were standing still with their eyes open. A total of 70 subjects with PD, including a mild group (H&Y≤2, N=33) and a more severe group (H&Y≥3, N=37), were assessed while OFF and while ON levodopa medication. We also included a healthy control group (N=21). Results showed an overall preference for ankle strategy in all groups while maintaining balance. Postural strategy was significantly lower ON compared to OFF medication (indicating more hip strategy), but no effect of disease stage was found. Instead, sway dispersion was significantly larger ON compared to OFF medication, and significantly larger in the more severe PD group compared to the mild group. In addition, increased hip strategy during stance was associated with poorer self-perception of balance.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1WDhCx2
via IFTTT

Instantaneous progression reference frame for calculating pelvis rotations: Reliable and anatomically-meaningful results independent of the direction of movement

Publication date: Available online 22 February 2016
Source:Gait & Posture
Author(s): Hans Kainz, David G. Lloyd, Henry P.J. Walsh, Christopher P. Carty
In motion analysis, pelvis angles are conventionally calculated as the rotations between the pelvis and laboratory reference frame. This approach assumes that the participant's motion is along the anterior-posterior laboratory reference frame axis. When this assumption is violated, interpretation of pelvis angles becomes problematic. In this paper a new approach for calculating pelvis angles based on the rotations between the pelvis and an instantaneous progression reference frame was introduced. At every time-point, the tangent to the trajectory of the midpoint of the pelvis projected into the horizontal plane of the laboratory reference frame was used to define the anterior-posterior axis of the instantaneous progression reference frame. This new approach combined with the rotation-obliquity-tilt rotation sequence was compared to the conventional approach using the rotation-obliquity-tilt and tilt-obliquity-rotation sequences. Four different movement tasks performed by eight healthy adults were analysed. The instantaneous progression reference frame approach was the only approach that showed reliable and anatomically meaningful results for all analysed movement tasks (mean root-mean-square differences below 5 degrees, differences in pelvis angles at pre-defined gait events below 10 degrees). Both rotation sequences combined with the conventional approach led to unreliable results as soon as the participant's motion was not along the anterior-posterior laboratory axis (mean root-mean-square differences up to 30 degrees, differences in pelvis angles at pre-defined gait events up to 45 degrees). The instantaneous progression reference frame approach enables the gait analysis community to analyse pelvis angles for movements that do not follow the anterior-posterior axis of the laboratory reference frame.
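A minimal sketch of the geometric idea follows, assuming the laboratory z axis is vertical, the subject is moving (non-zero horizontal velocity), and the pelvis-midpoint trajectory is available as an (N, 3) array of positions; the function name and axis conventions are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def instantaneous_progression_frames(pelvis_midpoint):
    """Return an (N, 3, 3) array of rotation matrices whose columns are the
    anterior-posterior, medio-lateral, and vertical axes of the
    instantaneous progression reference frame at each sample."""
    # The tangent of the horizontal-plane (x, y) trajectory approximates the
    # direction of progression at every time point.
    tangent = np.gradient(pelvis_midpoint[:, :2].astype(float), axis=0)
    ap = np.zeros((len(pelvis_midpoint), 3))
    ap[:, :2] = tangent / np.linalg.norm(tangent, axis=1, keepdims=True)

    vertical = np.tile([0.0, 0.0, 1.0], (len(ap), 1))  # laboratory vertical
    ml = np.cross(vertical, ap)                        # medio-lateral axis
    return np.stack([ap, ml, vertical], axis=-1)
```

Pelvis angles would then be obtained by decomposing the rotation between this frame and the pelvis anatomical frame with the rotation-obliquity-tilt sequence described in the abstract.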



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1oXd2zd
via IFTTT

Loading rate increases during barefoot running in habitually shod runners: individual responses to an unfamiliar condition

Publication date: Available online 22 February 2016
Source:Gait & Posture
Author(s): Nicholas Tam, Janie L. Astephen Wilson, Devon R. Coetzee, Leanri van Pletsen, Ross Tucker
The purpose of this study was to examine the effect of barefoot running on initial loading rate (LR), lower extremity joint kinematics and kinetics, and neuromuscular control in habitually shod runners, with an emphasis on the individual response to this unfamiliar condition. Kinematics, muscle activity and ground reaction force data were collected from 51 habitually shod runners during overground running in a barefoot and a shod condition. Joint kinetics and stiffness were calculated with inverse dynamics. Inter-individual initial LR variability was explored by separating individuals by a barefoot/shod ratio to determine acute responders/non-responders. Mean initial LR was 54.1% greater in the barefoot than in the shod condition. Differences between acute responders/non-responders were found in sagittal ankle angle at peak and at initial ground contact. Correlations were found between barefoot sagittal ankle angle at initial ground contact and barefoot initial LR. A large variability in biomechanical responses to an acute exposure to barefoot running was found. A large intra-individual variability was found in initial LR but not in ankle plantar-dorsiflexion between footwear conditions. A majority of habitually shod runners do not exhibit the previously reported benefit of reduced initial LR when barefoot, even though an acute increase in gastrocnemius activity, a decrease in tibialis anterior activity, and biomechanical differences between conditions were found. Lastly, runners who increased LR when barefoot reduced LRs when wearing shoes to levels similar to those seen in habitually barefoot runners who adopt a forefoot-landing pattern, despite increased dorsiflexion.
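The abstract does not spell out how the initial loading rate (LR) was computed; the sketch below shows one commonly used definition (the average slope of the vertical ground reaction force over 20-80% of the rise to its first peak, expressed in body weights per second), which may differ from the authors' exact method. It assumes the force trace starts at initial contact and that an impact peak occurs within roughly the first 100 ms.

```python
import numpy as np

def initial_loading_rate(vgrf_newtons, fs_hz, body_weight_newtons):
    """Average vertical loading rate in body weights per second, computed
    over 20-80% of the rise to the first vertical GRF peak."""
    first_peak = int(np.argmax(vgrf_newtons[: int(0.1 * fs_hz)]))
    lo, hi = int(0.2 * first_peak), int(0.8 * first_peak)
    slope = (vgrf_newtons[hi] - vgrf_newtons[lo]) * fs_hz / (hi - lo)  # N/s
    return slope / body_weight_newtons                                 # BW/s
```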



from #Audiology via ola Kala on Inoreader http://ift.tt/1WDhCgo
via IFTTT

Age-related changes in gait adaptability in response to unpredictable obstacles and stepping targets

Publication date: Available online 22 February 2016
Source:Gait & Posture
Author(s): Maria Joana D. Caetano, Stephen R. Lord, Daniel Schoene, Paulo H.S. Pelicioni, Daina L. Sturnieks, Jasmine C. Menant
Background: A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping, a frequently reported cause of falls in this group. Objective: To evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e., approximately two steps ahead. Methods: Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual gait speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Results: Compared with the young, the older adults slowed significantly in no target/obstacle trials compared with the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Conclusion: Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards.



from #Audiology via ola Kala on Inoreader http://ift.tt/1oXd3D6
via IFTTT

How do families of children with Down syndrome perceive speech intelligibility in Turkey?

Biomed Res Int. 2015;2015:707134

Authors: Toğram B

Abstract
Childhood verbal apraxia has not been sufficiently identified or treated in children with Down syndrome, although recent research has documented that its symptoms can be found in this population; it is still not routinely diagnosed. In Turkish there is neither an assessment tool nor any research on childhood verbal apraxia, despite a need for one not only for children with Down syndrome but also for normally developing children. This study examined whether oral-motor difficulties and features of childhood verbal apraxia in children with Down syndrome could be identified through a survey. The survey was a parental report measure, and 329 completed surveys were received. Results indicated that only 5.6% of the children with Down syndrome had been diagnosed with apraxia, even though many displayed clinical features of childhood verbal apraxia; the symptoms most frequently reported in the literature were observed in the children in this study. Parents were able to identify childhood verbal apraxia symptoms using the parent survey. These findings suggest that the survey could be developed into a screening tool for possible childhood verbal apraxia in Turkey.

PMID: 25977925 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QURzxp
via IFTTT

Can place-specific cochlear dispersion be represented by auditory steady-state responses?

Publication date: Available online 21 February 2016
Source:Hearing Research
Author(s): Andreu Paredes Gallardo, Bastian Epp, Torsten Dau
The present study investigated to what extent properties of local cochlear dispersion can be objectively assessed through auditory steady-state responses (ASSR). The hypothesis was that stimuli compensating for the phase response at a particular cochlear location generate a maximally modulated basilar membrane (BM) response at that BM position, due to the large “within-channel” synchrony of activity. This, in turn, leads to a larger ASSR amplitude than other stimuli of corresponding intensity and bandwidth. Two stimulus types were chosen: (1) harmonic tone complexes consisting of equal-amplitude tones with a starting phase following an algorithm developed by Schroeder [IEEE Trans. Inf. Theory 16, 85-89 (1970)], which have earlier been considered in behavioral studies to estimate human auditory filter phase responses; and (2) simulations of auditory-filter impulse responses (IR). In both cases, the temporally reversed versions of the stimuli were also considered. The ASSRs obtained with the Schroeder tone complexes were found to be dominated by “across-channel” synchrony and, thus, do not reflect local place-specific information. In the case of the more frequency-specific stimuli, no significant differences were found between the responses to the IR and its temporally reversed counterpart. Thus, whereas ASSRs to narrowband stimuli have been used as an objective indicator of frequency-specific hearing sensitivity, the method does not seem to be sensitive enough to reflect local cochlear dispersion.
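For reference, the Schroeder-phase rule mentioned above has a simple closed form. The sketch below generates an equal-amplitude harmonic complex using one common formulation of the positive/negative Schroeder starting phases, θn = ±π n(n + 1)/N; the fundamental frequency, number of components, and duration are illustration values, not the stimulus parameters used in the study.

```python
import numpy as np

def schroeder_complex(f0=100.0, n_components=40, duration_s=0.5,
                      fs=48000, sign=+1):
    """Equal-amplitude harmonic tone complex with Schroeder starting phases
    theta_n = sign * pi * n * (n + 1) / N, giving a flat-envelope waveform
    whose instantaneous frequency sweeps across the band within each period."""
    t = np.arange(int(duration_s * fs)) / fs
    n = np.arange(1, n_components + 1)
    phases = sign * np.pi * n * (n + 1) / n_components
    # (components x samples) matrix of cosines, summed over components.
    x = np.cos(2 * np.pi * np.outer(n * f0, t) + phases[:, None]).sum(axis=0)
    return x / np.abs(x).max()   # normalise to full scale
```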



from #Audiology via ola Kala on Inoreader http://ift.tt/1Qbe5pj
via IFTTT

Graded and discontinuous EphA-ephrinB expression patterns in the developing auditory brainstem

Publication date: Available online 21 February 2016
Source:Hearing Research
Author(s): Matthew M. Wallace, James A. Harris, Donald Q. Brubaker, Caitlyn A. Klotz, Mark L. Gabriele
Eph-ephrin interactions guide topographic mapping and pattern formation in a variety of systems. In contrast to other sensory pathways, their precise role in the assembly of central auditory circuits remains poorly understood. The auditory midbrain, or inferior colliculus (IC), is an intriguing structure for exploring guidance of patterned projections, as adjacent subdivisions exhibit distinct organizational features. The central nucleus of the IC (CNIC) and deep aspects of its neighboring lateral cortex (LCIC, Layer 3) are tonotopically organized and receive layered inputs from primarily downstream auditory sources. While less is known about more superficial aspects of the LCIC, its inputs are multimodal, lack a clear tonotopic order, and appear discontinuous, terminating in modular, patch/matrix-like distributions. Here we utilize X-Gal staining approaches in lacZ mutant mice (ephrin-B2, -B3, and EphA4) to reveal EphA-ephrinB expression patterns in the nascent IC during the period of projection shaping that precedes hearing onset. We also report early postnatal protein expression in the cochlear nuclei, the superior olivary complex, the nuclei of the lateral lemniscus, and relevant midline structures. Continuous ephrin-B2 and EphA4 expression gradients exist along the frequency axes of the CNIC and LCIC Layer 3. In contrast, more superficial LCIC localization is not graded, but confined to a series of discrete ephrin-B2- and EphA4-positive Layer 2 modules. While ephrin-B3 is heavily expressed at the midline, much of the auditory brainstem is devoid of it, including the CNIC, the LCIC Layer 2 modular fields, the dorsal nucleus of the lateral lemniscus (DNLL), and much of the superior olivary complex and cochlear nuclei. Ephrin-B3 LCIC expression appears complementary to that of ephrin-B2 and EphA4, with protein most concentrated in presumptive extramodular zones. The described tonotopic gradients and seemingly complementary modular/extramodular patterns suggest Eph-ephrin guidance in establishing juxtaposed continuous and discrete neural maps in the developing IC prior to experience.



from #Audiology via ola Kala on Inoreader http://ift.tt/1VyL3jG
via IFTTT
