Saturday, 30 September 2017

Video Roundup: Celebrities Talk About their Tinnitus

Even though tinnitus sufferers often feel alone, tinnitus is a remarkably common symptom. Below, we’ve rounded up a number of video interviews in which celebrities talk about their tinnitus:

Here’s Chris Martin of the band Coldplay talking about his tinnitus:

Below, actor William Shatner talks about his experience with developing tinnitus in the film industry:

Here’s a link to a video of Will.i.am of the Black Eyed Peas talking about his tinnitus.

Here’s a link to a video of Barbra Streisand talking about her tinnitus.

And finally, here’s a video of Ryan Adams talking about his tinnitus (in the context of Meniere’s Disease).



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2x61m0N
via IFTTT

Hip movement pathomechanics of patients with hip osteoarthritis aim at reducing hip joint loading on the osteoarthritic side


Publication date: January 2018
Source: Gait & Posture, Volume 59
Author(s): Christophe A.G. Meyer, Mariska Wesseling, Kristoff Corten, Angela Nieuwenhuys, Davide Monari, Jean-Pierre Simon, Ilse Jonkers, Kaat Desloovere
This study aims at defining gait pathomechanics in patients with hip osteoarthritis (OA) and their effect on hip joint loading by combining analyses of hip kinematics, kinetics and contact forces during gait. Twenty patients with hip OA and 17 healthy volunteers matched for age and BMI performed three-dimensional gait analysis. Hip OA level was evaluated on plain radiographs using the Tönnis classification. Hip joint kinematics and kinetics, as well as hip contact forces, were calculated. Waveforms were time normalized and compared between groups using statistical parametric mapping analysis. Patients walked with a reduced hip adduction angle and reduced hip abduction and external rotation moments. The work generated by the hip abductors during the stance phase of gait was largely decreased. These changes resulted in a decrease, and a more vertical and anterior orientation, of the hip contact forces compared to healthy controls. This study documents alterations in hip kinematics and kinetics resulting in decreased hip loading in patients with hip OA. The results suggest that patients altered their gait to increase medio-lateral stability, thereby decreasing demand on the hip abductors. These findings support unloading of the abductor muscles, which may be clinically relevant for tailored rehabilitation targeting hip abductor strengthening and gait retraining.
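The time normalization mentioned in the abstract resamples each recorded stride onto a common 0–100% gait-cycle axis, so that waveforms of different durations can be compared point by point. A minimal sketch, assuming simple linear interpolation (the study's actual pipeline and its statistical parametric mapping step are not described here; `time_normalize` is an illustrative name):

```python
import numpy as np

def time_normalize(signal, n_points=101):
    """Resample one gait-cycle signal onto n_points evenly spaced
    samples (0-100% of the cycle) by linear interpolation."""
    signal = np.asarray(signal, dtype=float)
    old_x = np.linspace(0.0, 1.0, num=len(signal))
    new_x = np.linspace(0.0, 1.0, num=n_points)
    return np.interp(new_x, old_x, signal)

# Example: a stride captured in 87 frames becomes a 101-point curve.
stride = np.sin(np.linspace(0.0, 2.0 * np.pi, 87))
curve = time_normalize(stride)
```

Group comparisons such as statistical parametric mapping then operate on these fixed-length curves rather than on raw, variable-length strides.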



from #Audiology via ola Kala on Inoreader http://ift.tt/2fZxGNh
via IFTTT


Friday, 29 September 2017

Hearing Aids for Mild-to-Moderate Hearing Loss in Adults

A recent systematic review concluded that hearing aid use in older adults with mild-to-moderate hearing loss was beneficial: it improved everyday listening situations, general health-related quality of life, and listening ability, with little evidence of harm.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2fDzezA
via IFTTT

Concurrent validity of an automated algorithm for computing the center of pressure excursion index (CPEI)


Publication date: January 2018
Source: Gait & Posture, Volume 59
Author(s): Michelle A. Diaz, Mandi W. Gibbons, Jinsup Song, Howard J. Hillstrom, Kersti H. Choe, Maria R. Pasquale
The Center of Pressure Excursion Index (CPEI), a parameter computed from the distribution of plantar pressures during the stance phase of barefoot walking, has been used to assess dynamic foot function. The original custom program developed to calculate CPEI required the oversight of a user who could manually correct for certain exceptions to the computational rules. A new, fully automatic program has been developed to calculate CPEI with an algorithm that accounts for these exceptions. The purpose of this paper is to compare the CPEI values computed by these two programs on plantar pressure data from both asymptomatic and pathologic subjects. If comparable, the new program offers significant benefits: reduced potential for variability due to rater discretion and faster CPEI calculation. CPEI values were calculated from barefoot plantar pressure distributions during comfortably paced walking in 61 healthy asymptomatic adults, 19 diabetic adults with moderate hallux valgus, and 13 adults with mild hallux valgus. Right-foot data for each subject were analyzed with linear regression and a Bland-Altman plot. The automated algorithm yielded CPEI values that were linearly related to those of the original program (R² = 0.99; P < 0.001). Bland-Altman analysis demonstrated a difference of 0.55% between the two CPEI computation methods. The results of this analysis suggest that the new automated algorithm may be used to calculate CPEI in both healthy and pathologic feet.
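A Bland-Altman analysis, as used above, summarizes agreement between two measurement methods as a mean difference (bias) and 95% limits of agreement computed from the paired differences. A minimal sketch with synthetic paired values (these numbers are illustrative, not the study's CPEI data):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for two paired measurement
    series: bias is the mean difference, and the limits of agreement
    are bias +/- 1.96 * SD of the differences."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic example: every pair differs by exactly 0.5, so the bias is
# 0.5 and the limits of agreement collapse onto it (SD of differences = 0).
paired_a = [6.1, 7.4, 5.9, 8.2]
paired_b = [5.6, 6.9, 5.4, 7.7]
bias, lower, upper = bland_altman(paired_a, paired_b)
```

In a real comparison the limits of agreement would bracket the bias, and a narrow interval (like the 0.55% difference reported above) indicates the two methods can be used interchangeably.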



from #Audiology via ola Kala on Inoreader http://ift.tt/2xCBnlZ
via IFTTT

A “HOLTER” for Parkinson’s disease: Validation of the ability to detect on-off states using the REMPARK system


Publication date: January 2018
Source: Gait & Posture, Volume 59
Author(s): Àngels Bayés, Albert Samá, Anna Prats, Carlos Pérez-López, Maricruz Crespo-Maraver, Juan Manuel Moreno, Sheila Alcaine, Alejandro Rodriguez-Molinero, Berta Mestre, Paola Quispe, Ana Correia de Barros, Rui Castro, Alberto Costa, Roberta Annicchiarico, Patrick Browne, Tim Counihan, Hadas Lewy, Gabriel Vainstein, Leo R. Quinlan, Dean Sweeney, Gearóid ÓLaighin, Jordi Rovira, Daniel Rodriguez-Martin, Joan Cabestany
The treatment of Parkinson's disease (PD) with levodopa is very effective. However, over time, motor complications (MCs) appear, restricting the patient from leading a normal life. One of the most disabling MCs is ON-OFF fluctuations. Gathering accurate information about the clinical status of the patient is essential for planning treatment and assessing its effect. Systems such as the REMPARK system, capable of accurately and reliably monitoring ON-OFF fluctuations, are therefore of great interest. Objective: To analyze the ability of the REMPARK System to detect ON-OFF fluctuations. Methods: Forty-one patients with moderate to severe idiopathic PD, diagnosed according to the UK Parkinson’s Disease Society Brain Bank criteria, were recruited. Patients with motor fluctuations, freezing of gait and/or dyskinesia who were able to walk unassisted in the OFF phase were included in the study. Patients wore the REMPARK System for 3 days and recorded their motor state in a diary once every hour. Results: The record obtained by the REMPARK System, compared with the patient-completed diaries, demonstrated 97% sensitivity in detecting OFF states and 88% specificity (i.e., accuracy in detecting ON states). Conclusion: The REMPARK System provides an accurate evaluation of ON-OFF fluctuations in PD; this technology paves the way for optimisation of the symptomatic control of PD motor symptoms as well as accurate assessment of medication efficacy.
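The sensitivity and specificity figures reported above reduce to simple counting of the system's hourly labels against the diary labels. A minimal sketch (the function name and label encoding are illustrative, not taken from the REMPARK software):

```python
def sensitivity_specificity(reference, predicted, positive="OFF"):
    """Sensitivity = correctly detected OFF hours / all true OFF hours;
    specificity = correctly detected ON hours / all true ON hours."""
    pairs = list(zip(reference, predicted))
    tp = sum(1 for r, p in pairs if r == positive and p == positive)
    fn = sum(1 for r, p in pairs if r == positive and p != positive)
    tn = sum(1 for r, p in pairs if r != positive and p != positive)
    fp = sum(1 for r, p in pairs if r != positive and p == positive)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Toy diary vs. system comparison: one of two OFF hours is missed
# (sensitivity 0.5), all ON hours are detected (specificity 1.0).
diary = ["OFF", "OFF", "ON", "ON"]
system = ["OFF", "ON", "ON", "ON"]
sens, spec = sensitivity_specificity(diary, system)
```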



from #Audiology via ola Kala on Inoreader http://ift.tt/2xG5NBp
via IFTTT


Boys Town National Research Hospital: Past, Present, and Future



from #Audiology via ola Kala on Inoreader http://ift.tt/2yLa5rn
via IFTTT

Effects of Device on Video Head Impulse Test (vHIT) Gain



from #Audiology via ola Kala on Inoreader http://ift.tt/2ybGUkd
via IFTTT

Effect of Stimulus Polarity on Physiological Spread of Excitation in Cochlear Implants



from #Audiology via ola Kala on Inoreader http://ift.tt/2yKIGWp
via IFTTT

Relationship of Grammatical Context on Children’s Recognition of s/z-Inflected Words



from #Audiology via ola Kala on Inoreader http://ift.tt/2ycHP3O
via IFTTT

Listener Performance with a Novel Hearing Aid Frequency Lowering Technique



from #Audiology via ola Kala on Inoreader http://ift.tt/2yLPFP1
via IFTTT

Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss



from #Audiology via ola Kala on Inoreader http://ift.tt/2ycD5uW
via IFTTT

Identifying Otosclerosis with Aural Acoustical Tests of Absorbance, Group Delay, Acoustic Reflex Threshold, and Otoacoustic Emissions



from #Audiology via ola Kala on Inoreader http://ift.tt/2yKQzLo
via IFTTT

Perceptual Implications of Level- and Frequency-Specific Deviations from Hearing Aid Prescription in Children



from #Audiology via ola Kala on Inoreader http://ift.tt/2ydjdYM
via IFTTT

JAAA CEU Program



from #Audiology via ola Kala on Inoreader http://ift.tt/2yKWfFe
via IFTTT


Thursday, 28 September 2017

A nonsynonymous mutation in the WFS1 gene in a Finnish family with age-related hearing impairment


Publication date: Available online 28 September 2017
Source: Hearing Research
Author(s): Laura Kytövuori, Samuli Hannula, Elina Mäki-Torkko, Martti Sorri, Kari Majamaa
Wolfram syndrome (WS) is caused by recessive mutations in the Wolfram syndrome 1 (WFS1) gene. Sensorineural hearing impairment (HI) is a frequent feature in WS and, furthermore, certain mutations in WFS1 cause nonsyndromic, dominantly inherited low-frequency sensorineural HI. These two phenotypes are clinically distinct, indicating that WFS1 is a reasonable candidate for genetic studies in patients with other phenotypes of HI. Here we have investigated whether variation in WFS1 has a pathogenic role in age-related hearing impairment (ARHI). The WFS1 gene was examined in a population sample of 518 Finnish adults born in 1938–1949 and representing variable hearing phenotypes. Identified variants were evaluated with respect to their pathogenic potential. A rare mutation predicted to be pathogenic was found in a family with many members with impaired hearing. Twenty members were recruited for a segregation study and a detailed clinical examination. The heterozygous p.Tyr528His variant segregated completely with late-onset HI in which hearing deteriorated first at high frequencies and later progressed to the mid and low frequencies. We report the first mutation in the WFS1 gene causing late-onset HI with audiogram configurations typical of ARHI. Monogenic forms of ARHI are rare, and our results add WFS1 to the short list of such genes.



from #Audiology via ola Kala on Inoreader http://ift.tt/2xIvQJL
via IFTTT

Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss


Publication date: Available online 28 September 2017
Source: Hearing Research
Author(s): Tomasz Wolak, Katarzyna Cieśla, Artur Lorens, Krzysztof Kochanek, Monika Lewandowska, Mateusz Rusiniak, Agnieszka Pluta, Joanna Wójcik, Henryk Skarżyński
Although the tonotopic organisation of the human primary auditory cortex (PAC) has already been studied, the question of how its responses are affected in sensorineural hearing loss remains open. Twenty-six patients (aged 38.1 ± 9.1 years; 12 men) with symmetrical sloping sensorineural hearing loss (SNHL) and 32 age- and gender-matched controls (NH) participated in an fMRI study using a sparse protocol. The stimuli were binaural 8-s complex tones with central frequencies of 400 HzCF, 800 HzCF, 1600 HzCF, 3200 HzCF, or 6400 HzCF, presented at 80 dB(C). In NH, responses to all frequency ranges were found in bilateral auditory cortices. The outcomes of a winner-map approach, showing the relative arrangement of active frequency-specific areas, were in line with the existing literature and revealed a V-shaped high-frequency gradient surrounding areas that responded to low frequencies in the auditory cortex. In SNHL, frequency-specific auditory cortex responses were observed only for sounds from 400 HzCF to 1600 HzCF, owing to the severe or profound hearing loss in the higher frequency ranges. Using a stringent statistical threshold (p < 0.05, FWE), significant differences between NH and SNHL were revealed only for mid- and high-frequency sounds. At a more lenient statistical threshold (p < 0.001, FDRc), however, the extent of activation induced by 400 HzCF in PAC was statistically larger in patients with a prelingual, as compared to a postlingual, onset of hearing loss. In addition, this low-frequency range was more extensively represented in the auditory cortex when outcomes obtained in all patients were contrasted with those of normal-hearing individuals (although statistically significant only for the secondary auditory cortex). The outcomes of the study suggest preserved patterns of large-scale tonotopic organisation in SNHL, which can be further refined by auditory experience, especially when the hearing loss occurs prelingually. SNHL can induce both enlargement and reduction of the extent of responses in the tonotopically organized auditory cortex.



from #Audiology via ola Kala on Inoreader http://ift.tt/2fUazUa
via IFTTT

A nonsynonymous mutation in the WFS1 gene in a Finnish family with age-related hearing impairment

S03785955.gif

Publication date: Available online 28 September 2017
Source:Hearing Research
Author(s): Laura Kytövuori, Samuli Hannula, Elina Mäki-Torkko, Martti Sorri, Kari Majamaa
Wolfram syndrome (WS) is caused by recessive mutations in the Wolfram syndrome 1 (WFS1) gene. Sensorineural hearing impairment (HI) is a frequent feature in WS and, furthermore, certain mutations in WFS1 cause nonsyndromic dominantly inherited low-frequency sensorineural HI. These two phenotypes are clinically distinct indicating that WFS1 is a reasonable candidate for genetic studies in patients with other phenotypes of HI. Here we have investigated, whether the variation in WFS1 has a pathogenic role in age-related hearing impairment (ARHI). WFS1 gene was investigated in a population sample of 518 Finnish adults born in 1938–1949 and representing variable hearing phenotypes. Identified variants were evaluated with respect to pathogenic potential. A rare mutation predicted to be pathogenic was found in a family with many members with impaired hearing. Twenty members were recruited to a segregation study and a detailed clinical examination. Heterozygous p.Tyr528His variant segregated completely with late-onset HI in which hearing deteriorated first at high frequencies and progressed to mid and low frequencies later in life. We report the first mutation in the WFS1 gene causing late-onset HI with audiogram configurations typical for ARHI. Monogenic forms of ARHI are rare and our results add WFS1 to the short list of such genes.



from #Audiology via ola Kala on Inoreader http://ift.tt/2xIvQJL
via IFTTT

Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss

alertIcon.gif

Publication date: Available online 28 September 2017
Source:Hearing Research
Author(s): Tomasz Wolak, Katarzyna Cieśla, Artur Lorens, Krzysztof Kochanek, Monika Lewandowska, Mateusz Rusiniak, Agnieszka Pluta, Joanna Wójcik, Henryk Skarżyński
Although the tonotopic organisation of the human primary auditory cortex (PAC) has already been studied, the question how its responses are affected in sensorineural hearing loss remains open. Twenty six patients (aged 38.1 ± 9.1 years; 12 men) with symmetrical sloping sensorineural hearing loss (SNHL) and 32 age- and gender-matched controls (NH) participated in an fMRI study using a sparse protocol. The stimuli were binaural 8s complex tones with central frequencies of 400 HzCF, 800 HzCF, 1600 HzCF, 3200 HzCF, or 6400 HzCF, presented at 80 dB(C). In NH responses to all frequency ranges were found in bilateral auditory cortices. The outcomes of a winnermap approach, showing a relative arrangement of active frequency-specific areas, was in line with the existing literature and revealed a V-shape high-frequency gradient surrounding areas that responded to low frequencies in the auditory cortex. In SNHL frequency-specific auditory cortex responses were observed only for sounds from 400 HzCF to 1600 HzCF, due to the severe or profound hearing loss in higher frequency ranges. Using a stringent statistical threshold (p < 0.05; FWE) significant differences between NH and SNHL were only revealed for mid and high-frequency sounds. At a more lenient statistical threshold (p < 0.001, FDRc), however, the size of activation induced by 400 HzCF in PAC was found statistically larger in patients with a prelingual, as compared to a postlingual onset of hearing loss. In addition, this low-frequency range was more extensively represented in the auditory cortex when outcomes obtained in all patients were contrasted with those revealed in normal hearing individuals (although statistically significant only for the secondary auditory cortex). The outcomes of the study suggest preserved patterns of large-scale tonotopic organisation in SNHL which can be further refined following auditory experience, especially when the hearing loss occurs prelingually. 
SNHL can induce both enlargement and reduction of the extent of responses in the topically organized auditory cortex.



from #Audiology via ola Kala on Inoreader http://ift.tt/2fUazUa
via IFTTT

A nonsynonymous mutation in the WFS1 gene in a Finnish family with age-related hearing impairment

Publication date: Available online 28 September 2017
Source:Hearing Research
Author(s): Laura Kytövuori, Samuli Hannula, Elina Mäki-Torkko, Martti Sorri, Kari Majamaa
Wolfram syndrome (WS) is caused by recessive mutations in the Wolfram syndrome 1 (WFS1) gene. Sensorineural hearing impairment (HI) is a frequent feature in WS and, furthermore, certain mutations in WFS1 cause nonsyndromic dominantly inherited low-frequency sensorineural HI. These two phenotypes are clinically distinct indicating that WFS1 is a reasonable candidate for genetic studies in patients with other phenotypes of HI. Here we have investigated, whether the variation in WFS1 has a pathogenic role in age-related hearing impairment (ARHI). WFS1 gene was investigated in a population sample of 518 Finnish adults born in 1938–1949 and representing variable hearing phenotypes. Identified variants were evaluated with respect to pathogenic potential. A rare mutation predicted to be pathogenic was found in a family with many members with impaired hearing. Twenty members were recruited to a segregation study and a detailed clinical examination. Heterozygous p.Tyr528His variant segregated completely with late-onset HI in which hearing deteriorated first at high frequencies and progressed to mid and low frequencies later in life. We report the first mutation in the WFS1 gene causing late-onset HI with audiogram configurations typical for ARHI. Monogenic forms of ARHI are rare and our results add WFS1 to the short list of such genes.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xIvQJL
via IFTTT

Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss

Publication date: Available online 28 September 2017
Source:Hearing Research
Author(s): Tomasz Wolak, Katarzyna Cieśla, Artur Lorens, Krzysztof Kochanek, Monika Lewandowska, Mateusz Rusiniak, Agnieszka Pluta, Joanna Wójcik, Henryk Skarżyński
Although the tonotopic organisation of the human primary auditory cortex (PAC) has already been studied, the question of how its responses are affected by sensorineural hearing loss remains open. Twenty-six patients (aged 38.1 ± 9.1 years; 12 men) with symmetrical sloping sensorineural hearing loss (SNHL) and 32 age- and gender-matched controls (NH) participated in an fMRI study using a sparse protocol. The stimuli were binaural 8-s complex tones with central frequencies of 400 HzCF, 800 HzCF, 1600 HzCF, 3200 HzCF, or 6400 HzCF, presented at 80 dB(C). In NH, responses to all frequency ranges were found in bilateral auditory cortices. The outcomes of a winner-map approach, showing the relative arrangement of active frequency-specific areas, were in line with the existing literature and revealed a V-shaped high-frequency gradient surrounding areas that responded to low frequencies in the auditory cortex. In SNHL, frequency-specific auditory cortex responses were observed only for sounds from 400 HzCF to 1600 HzCF, due to the severe or profound hearing loss in higher frequency ranges. Using a stringent statistical threshold (p < 0.05, FWE), significant differences between NH and SNHL were revealed only for mid- and high-frequency sounds. At a more lenient statistical threshold (p < 0.001, FDRc), however, the extent of activation induced by 400 HzCF in PAC was statistically larger in patients with a prelingual, as compared to a postlingual, onset of hearing loss. In addition, this low-frequency range was more extensively represented in the auditory cortex when outcomes obtained in all patients were contrasted with those in normal-hearing individuals (although statistically significant only for the secondary auditory cortex). The outcomes of the study suggest preserved patterns of large-scale tonotopic organisation in SNHL, which can be further refined by auditory experience, especially when the hearing loss occurs prelingually.
SNHL can thus induce both enlargement and reduction of the extent of responses in the tonotopically organized auditory cortex.
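A "winner map" of the kind described above simply labels each voxel with the stimulus frequency that evoked its strongest response. A toy sketch of that idea (the response values and array shapes below are invented for illustration, not data from the study):

```python
import numpy as np

# Toy response amplitudes: responses[f, v] = activation of voxel v
# to the f-th centre frequency (400, 800, 1600, 3200, 6400 Hz).
centre_freqs = np.array([400, 800, 1600, 3200, 6400])
responses = np.array([
    [1.2, 0.1, 0.3],   # 400 Hz
    [0.4, 0.9, 0.2],   # 800 Hz
    [0.1, 0.3, 1.5],   # 1600 Hz
    [0.0, 0.2, 0.8],   # 3200 Hz
    [0.1, 0.1, 0.4],   # 6400 Hz
])

# Each voxel is assigned the frequency with the largest response.
winner_map = centre_freqs[np.argmax(responses, axis=0)]
print(winner_map.tolist())  # -> [400, 800, 1600]
```

Plotting such labels over the cortical surface is what reveals the low-to-high frequency gradients the abstract refers to.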



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2fUazUa
via IFTTT

Τρίτη 26 Σεπτεμβρίου 2017

Tinnitus Retraining Therapy

What is Tinnitus Retraining Therapy?

Tinnitus retraining therapy is a form of therapy used to help people who suffer from chronic buzzing and ringing in the ear. It has two components. The first is directive counseling: a professional teaches the person strategies for learning to ignore the tinnitus.

Sound therapy is the other important component. The person wears a device behind the ear that generates noise, which helps take their mind off the tinnitus.

Steps Involved in Tinnitus Retraining Therapy

This therapy includes the following:

- The professional will collect important information about the patient, such as daily living habits and health history.
- The patient will be fitted with a device that generates noise.
- The patient will receive psychological counseling. The main goal of counseling is to teach the person how to ignore the noise. Stress management is often taught during a counseling session, and deep relaxation exercises may also be covered. These techniques help eliminate anxiety, so the brain no longer perceives the tinnitus as a threat and the person is able to take their mind off of it.

The amount of time that a person will need therapy can vary. How well a person responds to the treatment is one of the factors that will affect how long the treatment will last. Keep in mind that there is no cure for tinnitus. However, many people notice that their symptoms are less frequent after they get treatment.

The Effectiveness of Tinnitus Retraining Therapy

One study compared the two approaches in patients who suffered from tinnitus. The subjects were divided into two groups: one group received tinnitus masking, while the other received tinnitus retraining therapy (TRT). The study lasted for 18 months, and the subjects were given one of the treatments at 0, 3, 6, 9, and 18 months. All of the subjects in the study were military veterans.

The subjects were also asked questions about their tinnitus symptoms. Both of the groups noticed a significant decrease in their tinnitus symptoms. However, the subjects who received the TRT treatment noticed a more drastic improvement. People who suffered from severe tinnitus were the ones who received the most benefit from TRT. Patients who suffered from moderate tinnitus noticed improvement, but the results were not as drastic.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xGVnDw
via IFTTT

Δευτέρα 25 Σεπτεμβρίου 2017

Semantic and Phonological Encoding Times in Adults Who Stutter: Brain Electrophysiological Evidence

Purpose
Some psycholinguistic theories of stuttering propose that language production operates along a different time course in adults who stutter (AWS) versus typically fluent adults (TFA). However, behavioral evidence for such a difference has been mixed. Here, the time course of semantic and phonological encoding in picture naming was compared in AWS (n = 16) versus TFA (n = 16) by measuring 2 event-related potential (ERP) components: NoGo N200, an ERP index of response inhibition, and lateralized readiness potential, an ERP index of response preparation.
Method
Each trial required a semantic judgment about a picture in addition to a phonemic judgment about the target label of the picture. Judgments were mapped onto a dual-choice (Go–NoGo/left–right) push-button response paradigm. On each trial, ERP activity time-locked to picture onset was recorded at 32 scalp electrodes.
Results
NoGo N200 was detected earlier to semantic NoGo trials than to phonemic NoGo trials in both groups, replicating previous evidence that semantic encoding generally precedes phonological encoding in language production. Moreover, N200 onset was earlier to semantic NoGo trials in TFA than in AWS, indicating that semantic information triggering response inhibition became available earlier in TFA versus AWS. In contrast, the time course of N200 activity to phonemic NoGo trials did not differ between groups. Lateralized readiness potential activity was influenced by strategic response preparation and, thus, could not be used to index real-time semantic and phonological encoding.
Conclusion
NoGo N200 results point to slowed semantic encoding in AWS versus TFA. Discussion considers possible factors in slowed semantic encoding in AWS and how fluency might be impacted by slowed semantic encoding.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0309/2656220/Semantic-and-Phonological-Encoding-Times-in-Adults
via IFTTT

Semantic and Phonological Encoding Times in Adults Who Stutter: Brain Electrophysiological Evidence

Purpose
Some psycholinguistic theories of stuttering propose that language production operates along a different time course in adults who stutter (AWS) versus typically fluent adults (TFA). However, behavioral evidence for such a difference has been mixed. Here, the time course of semantic and phonological encoding in picture naming was compared in AWS (n = 16) versus TFA (n = 16) by measuring 2 event-related potential (ERP) components: NoGo N200, an ERP index of response inhibition, and lateralized readiness potential, an ERP index of response preparation.
Method
Each trial required a semantic judgment about a picture in addition to a phonemic judgment about the target label of the picture. Judgments were mapped onto a dual-choice (Go–NoGo/left–right) push-button response paradigm. On each trial, ERP activity time-locked to picture onset was recorded at 32 scalp electrodes.
Results
NoGo N200 was detected earlier to semantic NoGo trials than to phonemic NoGo trials in both groups, replicating previous evidence that semantic encoding generally precedes phonological encoding in language production. Moreover, N200 onset was earlier to semantic NoGo trials in TFA than in AWS, indicating that semantic information triggering response inhibition became available earlier in TFA versus AWS. In contrast, the time course of N200 activity to phonemic NoGo trials did not differ between groups. Lateralized readiness potential activity was influenced by strategic response preparation and, thus, could not be used to index real-time semantic and phonological encoding.
Conclusion
NoGo N200 results point to slowed semantic encoding in AWS versus TFA. Discussion considers possible factors in slowed semantic encoding in AWS and how fluency might be impacted by slowed semantic encoding.

from #Audiology via xlomafota13 on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0309/2656220/Semantic-and-Phonological-Encoding-Times-in-Adults
via IFTTT

Semantic and Phonological Encoding Times in Adults Who Stutter: Brain Electrophysiological Evidence

Purpose
Some psycholinguistic theories of stuttering propose that language production operates along a different time course in adults who stutter (AWS) versus typically fluent adults (TFA). However, behavioral evidence for such a difference has been mixed. Here, the time course of semantic and phonological encoding in picture naming was compared in AWS (n = 16) versus TFA (n = 16) by measuring 2 event-related potential (ERP) components: NoGo N200, an ERP index of response inhibition, and lateralized readiness potential, an ERP index of response preparation.
Method
Each trial required a semantic judgment about a picture in addition to a phonemic judgment about the target label of the picture. Judgments were mapped onto a dual-choice (Go–NoGo/left–right) push-button response paradigm. On each trial, ERP activity time-locked to picture onset was recorded at 32 scalp electrodes.
Results
NoGo N200 was detected earlier to semantic NoGo trials than to phonemic NoGo trials in both groups, replicating previous evidence that semantic encoding generally precedes phonological encoding in language production. Moreover, N200 onset was earlier to semantic NoGo trials in TFA than in AWS, indicating that semantic information triggering response inhibition became available earlier in TFA versus AWS. In contrast, the time course of N200 activity to phonemic NoGo trials did not differ between groups. Lateralized readiness potential activity was influenced by strategic response preparation and, thus, could not be used to index real-time semantic and phonological encoding.
Conclusion
NoGo N200 results point to slowed semantic encoding in AWS versus TFA. Discussion considers possible factors in slowed semantic encoding in AWS and how fluency might be impacted by slowed semantic encoding.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0309/2656220/Semantic-and-Phonological-Encoding-Times-in-Adults
via IFTTT

Corrigendum to “Place dependent stimulation rates improve pitch perception in cochlear implantees with single-sided deafness” [Hear. Res. 339 (2016) 94–103]


Publication date: October 2017
Source:Hearing Research, Volume 354
Author(s): Tobias Rader, Julia Döge, Youssef Adel, Tobias Weissgerber, Uwe Baumann




from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xrNphJ
via IFTTT


Are You a Slave to Your E-mail? 

Do you find yourself constantly checking it throughout the day? Does it keep you from getting other, more important tasks done? If so, you may want to read Paul Argenti's Harvard Business Review article, "Stop Letting Email Control Your Work Day." Argenti first suggests taking stock of all your work tasks and dividing them into four categories.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2fuh3fr
via IFTTT

Σάββατο 23 Σεπτεμβρίου 2017

Sound wave propagation on the human skull surface with bone conduction stimulation


Publication date: Available online 23 September 2017
Source:Hearing Research
Author(s): Ivo Dobrev, Jae Hoon Sim, Stefan Stenfelt, Sebastian Ihrle, Rahel Gerig, Flurin Pfiffner, Albrecht Eiber, Alexander M. Huber, Christof Röösli
Background: Bone conduction (BC) is an alternative to air conduction for stimulating the inner ear. In general, BC stimulation is applied either at a specific location directly on the skull bone or through the skin covering the skull bone. The stimulation propagates to the ipsilateral and contralateral cochlea, mainly via the skull bone and possibly via other skull contents. This study aims to investigate wave propagation on the surface of the skull bone during BC stimulation at the forehead and at the ipsilateral mastoid.
Methods: Measurements were performed on five human cadaveric whole heads. The electromagnetic transducer from a BCHA (bone-conduction hearing aid), specifically a Baha® Cordelle II transducer, was attached to a percutaneously implanted screw or positioned with a 5-Newton steel headband at the mastoid and forehead. The Baha transducer was driven directly with single-tone signals in the frequency range of 0.25–8 kHz, while skull bone vibrations were measured at multiple points on the skull using a scanning laser Doppler vibrometer (SLDV) system and a 3D LDV system. The 3D velocity components, defined in the 3D LDV measurement coordinate system, were transformed into tangent (in-plane) and normal (out-of-plane) components in a local intrinsic coordinate system at each measurement point, based on the cadaver head's shape as estimated from the spatial locations of all measurement points.
Results: Rigid-body-like motion was dominant at low frequencies below 1 kHz, and clear transverse traveling waves were observed at high frequencies above 2 kHz for both measurement systems. The surface-wave propagation speed was approximately 450 m/s at 8 kHz, corresponding to a trans-cranial time interval of 0.4 ms. The 3D velocity measurements confirmed the complex space- and frequency-dependent response of the cadaver heads indicated by the 1D data from the SLDV system. Comparison between the tangent and normal motion components, extracted by transforming the 3D velocity components into a local coordinate system, indicates that the normal component, with spatially varying phase, is dominant above 2 kHz, consistent with local bending vibration modes and traveling surface waves.
Conclusion: Both SLDV and 3D LDV data indicate that sound transmission in the skull bone causes rigid-body-like motion at low frequencies, whereas transverse deformations and travelling waves were observed above 2 kHz, with propagation speeds of approximately 450 m/s at 8 kHz.
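The reported propagation speed and trans-cranial delay are related by simple kinematics. A rough sketch (the ~18 cm trans-cranial path length is an assumed illustrative value, not a figure from the study):

```python
# Relate surface-wave speed, wavelength, and trans-cranial travel time.
speed = 450.0        # m/s, reported propagation speed at 8 kHz
frequency = 8000.0   # Hz
path_length = 0.18   # m, assumed trans-cranial distance (illustrative)

wavelength = speed / frequency                 # wavelength = v / f
travel_time_ms = path_length / speed * 1000.0  # ~0.4 ms, matching the abstract

print(f"wavelength ~= {wavelength * 100:.1f} cm, delay ~= {travel_time_ms:.2f} ms")
```

At 8 kHz the wavelength (a few centimetres) is small relative to the skull, which is consistent with the travelling-wave behaviour observed above 2 kHz.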



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xqgQ1s
via IFTTT


Παρασκευή 22 Σεπτεμβρίου 2017

Validating a Rapid, Automated Test of Spatial Release From Masking

Purpose
To evaluate the test–retest reliability of a headphone-based spatial-release-from-masking task with two maskers (referred to here as the SR2) and to describe its relationship to the same test done over loudspeakers in an anechoic chamber (the SR2A). We explore what thresholds tell us about certain populations (such as older individuals or individuals with hearing impairment) and discuss how the SR2 might be useful in the clinic.
Method
Fifty-four participants completed speech intelligibility tests in which a target phrase and two masking phrases from the Coordinate Response Measure corpus (Bolia, Nelson, Ericson, & Simpson, 2000) were presented either via earphones using a virtual spatial array or via loudspeakers in an anechoic chamber. For the SR2, the target sentence was always at 0° azimuth angle, and the maskers were either colocated at 0° or positioned at ± 45°. For the SR2A, the target was located at 0°, and the maskers were colocated or located at ± 15°, ± 30°, ± 45°, ± 90°, or ± 135°. Spatial release from masking was determined as the difference between thresholds in the colocated condition and each spatially separated condition. All participants completed the SR2 at least twice, and 29 of the individuals who completed the SR2 at least twice also participated in the SR2A. In a second experiment, 40 participants completed the SR2 8 times, and the changes in performance were evaluated as a function of test repetition.
Results
Mean thresholds were slightly better on the SR2 after the first repetition but were consistent across 8 subsequent testing sessions. Performance was consistent for the SR2A, regardless of the number of times testing was repeated. The SR2, which simulates 45° separations of target and maskers, produced spatially separated thresholds that were similar to thresholds obtained with 30° of separation in the anechoic chamber. Over headphones and in the anechoic chamber, pure-tone average was a strong predictor of spatial release, whereas age only reached significance for colocated conditions.
Conclusions
The SR2 is a reliable and effective method of testing spatial release from masking, suitable for screening abnormal listening abilities and for tracking rehabilitation over time. Future work should focus on developing and validating rapid, automated testing to identify the ability of listeners to benefit from high-frequency amplification, smaller spatial separations, and larger spectral differences among talkers.
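The spatial-release metric defined in the Method section is just a difference of speech-reception thresholds between the colocated and separated conditions. A minimal sketch (the threshold values below are hypothetical, in dB):

```python
def spatial_release(colocated_db, separated_db):
    """Spatial release from masking (dB): how much the speech-reception
    threshold improves when the maskers are moved away from the target."""
    return colocated_db - separated_db

# Hypothetical target-to-masker thresholds in dB (lower is better).
colocated = 2.0                              # maskers colocated with the target at 0 deg
separated = {15: -1.0, 30: -4.5, 45: -6.0}   # maskers at +/- separation (deg)

for angle, threshold in sorted(separated.items()):
    print(f"+/-{angle} deg: {spatial_release(colocated, threshold):.1f} dB of release")
```

A positive value means the listener benefited from the spatial separation; abnormally small release at a given separation is the kind of deficit the SR2 is meant to screen for.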

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_AJA-17-0013/2655026/Validating-a-Rapid-Automated-Test-of-Spatial
via IFTTT

Characteristics and Treatment Outcomes of Benign Paroxysmal Positional Vertigo in a Cohort of Veterans

Background
The Mountain Home Veterans Affairs (VA) Medical Center has been diagnosing and treating veterans with benign paroxysmal positional vertigo (BPPV) for almost 2 decades. The clinic protocol includes a 2-week follow-up visit to determine the treatment outcome of the canalith repositioning treatment (CRT). To date, the characteristics of BPPV and treatment efficacy have not been reported in a cohort of veterans with BPPV.
Purpose
To determine the prevalence and characteristics of veterans diagnosed with BPPV in a Veterans Affairs Medical Center Audiology Clinic and to examine treatment outcomes.
Research Design
Retrospective chart review.
Study Sample
A total of 102 veterans who tested positive for BPPV in the Vestibular Clinic at the Mountain Home VA Medical Center from March 2010 to August 2011.
Results
In the 102 veterans who were diagnosed with BPPV, the posterior semicircular canal was most often involved (75%), motion-provoked vertigo was the most common symptom (84%), and the largest proportion (43%) were diagnosed with BPPV in their sixth decade. The prevalence of BPPV in the Audiology Vestibular Clinic was 15.6%. Forty-one percent of veterans reported symptom onset within 12 months of treatment for BPPV; however, 36% reported that their symptoms began > 36 months prior to treatment. CRT was effective (negative Dix–Hallpike/roll test) in most veterans (86%) following one treatment appointment (M = 1.6), but more than half reported incomplete symptom resolution (residual dizziness) at the follow-up appointment. Eighteen percent of veterans experienced a recurrence (M = 1.8 years; SD = 1.7 years).
Conclusions
The characteristics and treatment outcomes of BPPV in our veteran cohort were similar to those reported in the general population. Future work should focus on improving the timeliness of evaluation and treatment of BPPV and on examining the time course and management of residual dizziness.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_AJA-16-0118/2654846/Characteristics-and-Treatment-Outcomes-of-Benign
via IFTTT


Validating a Rapid, Automated Test of Spatial Release From Masking

Purpose
To evaluate the test–retest reliability of a headphone-based spatial release from a masking task with two maskers (referred to here as the SR2) and to describe its relationship to the same test done over loudspeakers in an anechoic chamber (the SR2A). We explore what thresholds tell us about certain populations (such as older individuals or individuals with hearing impairment) and discuss how the SR2 might be useful in the clinic.
Method
Fifty-four participants completed speech intelligibility tests in which a target phrase and two masking phrases from the Coordinate Response Measure corpus (Bolia, Nelson, Ericson, & Simpson, 2000) were presented either via earphones using a virtual spatial array or via loudspeakers in an anechoic chamber. For the SR2, the target sentence was always at 0° azimuth, and the maskers were either colocated at 0° or positioned at ±45°. For the SR2A, the target was located at 0°, and the maskers were colocated or located at ±15°, ±30°, ±45°, ±90°, or ±135°. Spatial release from masking was determined as the difference between thresholds in the colocated condition and each spatially separated condition. All participants completed the SR2 at least twice, and 29 of them also completed the SR2A. In a second experiment, 40 participants completed the SR2 8 times, and changes in performance were evaluated as a function of test repetition.
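The spatial release computation described above reduces to a simple difference of thresholds. A minimal illustrative sketch (the threshold values are hypothetical, not data from the study):

```python
# Illustrative sketch (not the authors' code): spatial release from
# masking (SRM) is the colocated threshold minus the spatially
# separated threshold. Threshold values below are hypothetical,
# in dB target-to-masker ratio.

def spatial_release(colocated_db, separated_db):
    """Return SRM in dB; positive values mean the listener benefits
    from spatial separation of target and maskers."""
    return colocated_db - separated_db

# Example: +2 dB needed with colocated maskers, -6 dB with maskers
# at +/-45 degrees, giving 8 dB of spatial release.
srm = spatial_release(2.0, -6.0)  # 8.0
```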
Results
Mean thresholds were slightly better on the SR2 after the first repetition but were consistent across 8 subsequent testing sessions. Performance was consistent for the SR2A, regardless of the number of times testing was repeated. The SR2, which simulates 45° separations of target and maskers, produced spatially separated thresholds that were similar to thresholds obtained with 30° of separation in the anechoic chamber. Over headphones and in the anechoic chamber, pure-tone average was a strong predictor of spatial release, whereas age only reached significance for colocated conditions.
Conclusions
The SR2 is a reliable and effective method of testing spatial release from masking, suitable for screening abnormal listening abilities and for tracking rehabilitation over time. Future work should focus on developing and validating rapid, automated testing to identify the ability of listeners to benefit from high-frequency amplification, smaller spatial separations, and larger spectral differences among talkers.

from #Audiology via xlomafota13 on Inoreader http://article/doi/10.1044/2017_AJA-17-0013/2655026/Validating-a-Rapid-Automated-Test-of-Spatial
via IFTTT

Characteristics and Treatment Outcomes of Benign Paroxysmal Positional Vertigo in a Cohort of Veterans

Background
The Mountain Home Veterans Affairs (VA) Medical Center has been diagnosing and treating veterans with benign paroxysmal positional vertigo (BPPV) for almost 2 decades. The clinic protocol includes a 2-week follow-up visit to determine the treatment outcome of the canalith repositioning treatment (CRT). To date, the characteristics of BPPV and treatment efficacy have not been reported in a cohort of veterans with BPPV.
Purpose
To determine the prevalence and characteristics of veterans diagnosed with BPPV in a Veterans Affairs Medical Center Audiology Clinic and to examine treatment outcomes.
Research Design
Retrospective chart review.
Study Sample
A total of 102 veterans who tested positive for BPPV in the Vestibular Clinic at the Mountain Home VA Medical Center from March 2010 to August 2011.
Results
In 102 veterans who were diagnosed with BPPV, the posterior semicircular canal was most often involved (75%), motion-provoked vertigo was the most common symptom (84%), and the majority (43%) were diagnosed with BPPV in their sixth decade. The prevalence of BPPV in the Audiology Vestibular Clinic was 15.6%. Forty-one percent of veterans reported a symptom onset within 12 months of treatment for BPPV; however, 36% reported their symptoms began > 36 months prior to treatment. CRT was effective (negative Dix–Hallpike/roll test) in most veterans (86%) following 1 treatment appointment (M = 1.6), but more than half reported incomplete symptom resolution (residual dizziness) at the follow-up appointment. Eighteen percent of veterans experienced a recurrence (M = 1.8 years; SD = 1.7 years).
Conclusions
The characteristics and treatment outcomes of BPPV in our veteran cohort were similar to those reported in the general population. Future work should focus on improving the timeliness of evaluation and treatment of BPPV and examining the time course and management of residual dizziness.

from #Audiology via xlomafota13 on Inoreader http://article/doi/10.1044/2017_AJA-16-0118/2654846/Characteristics-and-Treatment-Outcomes-of-Benign
via IFTTT

How African American English-Speaking First Graders Segment and Rhyme Words and Nonwords With Final Consonant Clusters

Purpose
This study explored how typically developing 1st grade African American English (AAE) speakers differ from mainstream American English (MAE) speakers in the completion of 2 common phonological awareness tasks (rhyming and phoneme segmentation) when the stimulus items were consonant–vowel–consonant–consonant (CVCC) words and nonwords.
Method
Forty-nine 1st graders met criteria for 2 dialect groups: AAE and MAE. Three conditions were tested in each rhyme and segmentation task: Real Words No Model, Real Words With a Model, and Nonwords With a Model.
Results
The AAE group had significantly more responses that rhymed CVCC words with consonant–vowel–consonant words and segmented CVCC words as consonant–vowel–consonant than the MAE group across all experimental conditions. In the rhyming task, the presence of a model in the real word condition elicited more reduced final cluster responses for both groups. In the segmentation task, the MAE group was at ceiling, so only the AAE group changed across the different stimulus presentations and reduced the final cluster less often when given a model.
Conclusion
Rhyming and phoneme segmentation performance can be influenced by a child's dialect when CVCC words are used.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_LSHSS-16-0062/2655088/How-African-American-EnglishSpeaking-First-Graders
via IFTTT

Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study

Purpose
The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period.
Method
Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups.
Results
Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print.
Conclusions
Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present: for phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and their rates of change were not sufficient to catch up to their peers over time.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_LSHSS-17-0023/2655089/Emergent-Literacy-Skills-in-Preschool-Children
via IFTTT

The heterospecific calling song can improve conspecific signal detection in a bushcricket species

Publication date: Available online 21 September 2017
Source:Hearing Research
Author(s): Zainab A.S. Abdelatti, Manfred Hartbauer
In forest clearings of the Malaysian rainforest, chirping and trilling Mecopoda species often live in sympatry. We investigated whether a phenomenon known as stochastic resonance (SR) improved the ability of individuals to detect a low-frequency signal component typical of chirps when members of the heterospecific trilling species were simultaneously active. This phenomenon may explain how the chirping species maintains entrainment to the conspecific song in the presence of the trill. Therefore, we evaluated the response probability of an ascending auditory neuron (TN-1) in individuals of the chirping Mecopoda species to triple-pulsed 2, 8 and 20 kHz signals that were broadcast 1 dB below the hearing threshold while increasing the intensity of either white noise or a typical triller song. Our results demonstrate the existence of SR over a rather broad range of signal-to-noise ratios (SNRs) when periodic 2 kHz and 20 kHz signals were presented at the same time as white noise. Using the chirp-specific 2 kHz signal as a stimulus, the maximum TN-1 response probability frequently exceeded the 50% threshold if the trill was broadcast simultaneously. Playback of an 8 kHz signal, a common frequency band component of the trill, yielded a similar result. Nevertheless, using the trill as a masker, the signal-related TN-1 spiking probability was rather variable; the variability on an individual level resulted from correlations between the phase relationship of the signal and the syllables of the trill. For the first time, these results demonstrate the existence of SR in acoustically communicating insects and suggest that the calling song of heterospecifics may facilitate the detection of a subthreshold signal component in certain situations. The results of a simulation of sound propagation in a computer model suggest a wide range of sender–receiver distances in which the triller can help to improve the detection of subthreshold signals in the chirping species.
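Stochastic resonance, the effect tested here, can be demonstrated with a toy threshold-detector model: a subthreshold signal becomes detectable at moderate noise levels but not at very low or very high ones. All parameters below are illustrative assumptions, not values from the paper:

```python
# Toy illustration of stochastic resonance (SR): a periodic signal held
# just below a detection threshold becomes detectable when moderate noise
# is added, while very little noise leaves it undetected and very strong
# noise swamps it with false alarms. All values are illustrative.
import random

def sr_index(signal_amp, threshold, noise_sd, n=5000, seed=1):
    """Hit rate on signal trials minus false-alarm rate on noise-only
    trials; this simple detectability index peaks at moderate noise."""
    rng = random.Random(seed)
    hits = sum(signal_amp + rng.gauss(0.0, noise_sd) > threshold
               for _ in range(n))
    false_alarms = sum(rng.gauss(0.0, noise_sd) > threshold
                       for _ in range(n))
    return (hits - false_alarms) / n

# Subthreshold signal: amplitude 0.9 against a threshold of 1.0.
low_noise = sr_index(0.9, 1.0, 0.01)   # ~0: signal never crosses threshold
mid_noise = sr_index(0.9, 1.0, 0.2)    # clearly positive: noise lifts the signal
high_noise = sr_index(0.9, 1.0, 3.0)   # reduced again: false alarms dominate
```

Detectability is highest at the intermediate noise level; this non-monotonic dependence on noise intensity is the signature of SR.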

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2hllkT3
via IFTTT

Pulsatile tinnitus: Causes, symptoms, and treatment

Tinnitus is a condition in which a person hears sounds within the ear, such as ringing. In pulsatile tinnitus, the sounds are in time with the beat of the person's pulse.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2jR4Rqw
via IFTTT

Does Standing at Work Help Performance?

How much time do you spend sitting every day? Between commuting, working, eating, and watching TV, the sedentary hours add up. The negative health effects of long hours of sitting are fairly well known and include chronic disease and psychological concerns. Seeing a potential market, several companies have designed standing workstations, even desks with treadmills attached. While these sit-to-stand desks are trending, is there any evidence to support their use?

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2hotO8o
via IFTTT

Thursday 21 September 2017

The effects of repeated low-level blast exposure on hearing in marines

Lina R Kubli, Robin L Pinto, Holly L Burrows, Philip D Littlefield, Douglas S Brungart

Noise and Health 2017 19(90):227-238

Background: The study evaluates a group of military service members, called “Breachers,” who specialize in explosive breaching training and are routinely exposed to multiple low-level blasts while teaching breaching for the U.S. Marine Corps in Quantico, Virginia. The objective of this study was to determine if there are any acute or long-term auditory changes due to repeated low-level blast exposures used in training. The performance of the instructor group “Breachers” was compared to that of a control group, “Engineers.” Methods: A total of 11 Breachers and 4 Engineers were evaluated in the study. The participants received comprehensive auditory tests, including pure-tone testing, speech-in-noise (SIN) measures, and central auditory behavioral and objective tests using early and late (P300) auditory evoked potentials over a period of 17 months. They also received shorter assessments immediately following blast exposure onsite at Quantico. Results: No acute or longitudinal effects were identified. However, there were some interesting baseline effects found in both groups. Contrary to expectations, the onsite hearing thresholds and distortion product otoacoustic emissions were slightly better at a few frequencies immediately after blast exposure than measurements obtained with the same equipment weeks to months after each blast exposure. Conclusions: To date, the current study is the most comprehensive evaluation of the long-term effects of blast exposure on hearing. Despite extensive testing to assess changes, the findings of this study suggest that the levels of exposure used in this military training environment do not seem to have an obvious deleterious effect on hearing.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2feFbPz
via IFTTT

Chronic noise exposure in the spontaneously hypertensive rat

Anne T.M Konkle, Stephen E Keith, James P McNamee, David Michaud

Noise and Health 2017 19(90):213-221

Introduction: Epidemiological studies have suggested an association between the relative risk for developing cardiovascular disease (CVD) and long-term exposure to elevated levels of transportation noise. The contention is that this association is largely owing to an increase in stress-related biomarkers that are thought to be associated with CVD. Animal models have demonstrated that acute noise exposure is capable of triggering a stress response; however, similar studies using chronic noise models are less common. Materials and Methods: The current study assessed the effects of intermittent daily exposure to broadband 80 kHz bandwidth noise of 87.3 dBA for a period of 21 consecutive days in spontaneously hypertensive rats. Results: Twenty-one days of exposure to noise significantly reduced body weight relative to the sham and unhandled control groups; however, noise had no statistically significant impact on plasma adrenocorticotropic hormone (or adrenal gland weights). Noise was associated with a significant, albeit modest, increase in both corticosterone and aldosterone concentrations following the 21 days of exposure. Interleukin 1 and interleukin 6 levels were unchanged in the noise group, whereas both tumour necrosis factor alpha and C-reactive protein were significantly reduced in noise exposed rats. Tail blood sampling for corticosterone throughout the exposure period showed no appreciable difference between the noise and sham exposed animals, largely due to the sizeable variation for each group as well as the observed fluctuations over time. Discussion: The current pilot study provides only modest support that chronic noise may promote stress-related biological and/or developmental effects. More research is required to verify the current findings and resolve some of the unexpected observations.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2w9Sc3O
via IFTTT

Sex bias in basic and preclinical noise-induced hearing loss research

Amanda Marie Lauer, Katrina Marie Schrode

Noise and Health 2017 19(90):207-212

Introduction: Sex differences in brain biochemistry, physiology, structure, and function have been gaining increasing attention in the scientific community. Males and females can have different responses to medications, diseases, and environmental variables. A small number of the approximately 7500 studies of noise-induced hearing loss (NIHL) have identified sex differences, but the mechanisms and characterization of these differences have not been thoroughly studied. The National Institutes of Health (NIH) issued a mandate in 2015 to include sex as a biological variable in all NIH-funded research beginning in January 2016. Materials and Methods: In the present study, the representation of sex as a biological variable in preclinical and basic studies of NIHL was quantified for a 5-year period from January 2011 to December 2015 prior to the implementation of the NIH mandate. Results: The analysis of 210 basic and preclinical studies showed that when sex is specified, experiments are predominantly performed on male animals. Discussion: This bias is present in studies completed in the United States and foreign institutions, and the proportion of studies using only male participants has actually increased over the 5-year period examined. Conclusion: These results underscore the need to invest resources in studying NIHL in both sexes to better understand how sex shapes the outcomes and to optimize treatment and prevention strategies.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2feOQpi
via IFTTT

Impact of usage of personal music systems on oto-acoustic emissions among medical students

Prasanth G Narahari, Jayashree Bhat, Arivudai Nambi, Anshul Arora

Noise and Health 2017 19(90):222-226

Background: Intact hearing is essential for medical students and physicians for communicating with patients and appreciating internal sounds with a stethoscope. With the increased use of personal music systems (PMSs), they are exposed to high sound levels and are at risk of developing hearing loss. The effect of long-term PMS usage on auditory sensitivity has been well established; our study reports the immediate, short-term effect of PMS usage on hearing, especially among medical professionals. Objective: To assess the effect of short-term PMS usage on distortion product otoacoustic emissions (DPOAEs) among medical professionals. Materials and Methods: 34 medical students within the age range of 17–22 years who were regular users of PMSs participated in the study. All participants had hearing thresholds <15 dB HL at audiometric octave frequencies. Baseline DPOAEs were measured in all participants after 18 h of non-usage of a PMS. One week later, DPOAEs were measured again after two hours of continuous listening to a PMS. DPOAEs were measured within the frequency range of 2 to 12 kHz with a resolution of 12 points per octave. The output sound pressure level of each participant’s PMS was measured in an HA-1 coupler and converted to free-field SPL using RECD and REUG transformations. Results: A paired-sample t test was used to investigate the main effect of short-term music listening on DPOAE amplitudes. Analysis revealed no significant main effect of music listening on DPOAE amplitudes at the octave frequencies between 2 and 4 kHz (t67 = −1.02, P = 0.31) or between 4 and 8 kHz (t67 = 0.24, P = 0.81). However, there was a small but statistically significant reduction in DPOAE amplitude (t67 = 2.10, P = 0.04) in the frequency range of 9 to 12 kHz following short-term usage of a PMS. The mean output sound pressure level of the PMSs was 98.29 dB SPL. Conclusion: Short-term exposure to music affects DPOAE amplitudes at high frequencies, and this serves as an early indicator for noise-induced hearing loss (NIHL). Analysis of output sound pressure levels suggests that the participants’ PMSs are capable of inducing hearing loss if listened to at the maximum volume setting. Hence, medical professionals need to be cautious while using PMSs.
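The paired-sample t test used in this analysis can be sketched in a few lines; the amplitude values below are hypothetical, for illustration only, and are not the study's data:

```python
# Sketch of a paired-sample t test, the analysis applied to DPOAE
# amplitudes measured before and after two hours of music listening.
# The amplitude values are hypothetical, for illustration only.
import math
import statistics

def paired_t(before, after):
    """Return the paired-sample t statistic and degrees of freedom."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return statistics.mean(diffs) / se, n - 1

# Hypothetical DPOAE amplitudes (dB SPL) for five ears at a high frequency:
baseline = [4.1, 2.8, 5.0, 3.3, 4.6]
post_listening = [3.2, 2.1, 4.0, 2.9, 3.8]
t_stat, df = paired_t(baseline, post_listening)  # t ≈ 7.38, df = 4
```

A large positive t here would indicate a consistent amplitude reduction after listening; the resulting t would then be compared against the t distribution with n − 1 degrees of freedom to obtain a P value.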

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2w9m0gH
via IFTTT

Environmental noise exposure modifies astrocyte morphology in hippocampus of young male rats

Odelie Huet-Bello, Yaveth Ruvalcaba-Delgadillo, Alfredo Feria-Velasco, Rocío E González-Castañeda, Joaquín Garcia-Estrada, Miguel A Macias-Islas, Fernando Jauregui-Huerta, Sonia Luquin

Noise and Health 2017 19(90):239-244

Background: Chronic exposure to noise induces changes in the central nervous system of exposed animals. Those changes affect not only the auditory system but also other structures indirectly related to audition. The hippocampus of young animals represents a potential target for these effects because of its essential role in individuals’ adaptation to environmental challenges. Objective: The aim of the present study was to evaluate hippocampal vulnerability by assessing astrocytic morphology in an experimental model of environmental noise (EN) applied to rats in the pre-pubescent stage. Materials and Methods: Weaned Wistar male rats were subjected to EN adapted to the rats’ audiogram for 15 days, 24 h daily. Once the exposure was completed, plasmatic corticosterone (CORT) concentration was quantified, and immunohistochemistry for glial fibrillary acidic protein was performed in the hippocampal DG, CA3, and CA1 subareas. Immunopositive cells and astrocyte arborizations were counted and compared between groups. Results: The rats subjected to noise exhibited increased length of astrocyte arborizations in all hippocampal subareas. Those changes were accompanied by a marked rise in serum CORT levels. Conclusions: These findings confirm hippocampal vulnerability to EN and suggest that glial cells may play an important role in the adaptation of developing individuals to noise exposure.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2felCqp
via IFTTT