Thursday, 25 August 2016

Inhaled Mannitol as a Laryngeal and Bronchial Provocation Test

Publication date: Available online 25 August 2016
Source:Journal of Voice
Author(s): Tunn Ren Tay, Ryan Hoy, Amanda L. Richards, Paul Paddle, Mark Hew
Objectives
Timely diagnosis of vocal cord dysfunction (VCD), more recently termed “inducible laryngeal obstruction,” is important because VCD is often misdiagnosed as asthma, resulting in delayed diagnosis and inappropriate treatment. Visualization of paradoxical vocal cord movement on laryngoscopy is the gold standard for diagnosis but is limited by poor test sensitivity. Provocation tests may improve the diagnosis of VCD, but the diagnostic performance of current tests is less than ideal, and alternative provocation tests are required. This pilot study demonstrates the feasibility of using inhaled mannitol for concurrent investigation of laryngeal and bronchial hyperresponsiveness.
Methods
Consecutive patients with suspected VCD seen at our institution's asthma clinic underwent flexible laryngoscopy at baseline and following mannitol challenge. VCD was diagnosed on laryngoscopy based on inspiratory adduction, or >50% expiratory adduction, of the vocal cords. Bronchial hyperresponsiveness after mannitol challenge was also assessed. We evaluated the interrater agreement of postmannitol laryngoscopy between respiratory specialists and laryngologists.
Results
Fourteen patients with suspected VCD in the context of asthma evaluation were included in the study. Mannitol provocation demonstrated VCD in three of the seven patients (42.9%) with normal baseline laryngoscopy. Only two patients had bronchial hyperresponsiveness. There was substantial interrater agreement between respiratory specialists and laryngologists: kappa = 0.696 (95% confidence interval: 0.324–1; P = 0.006).
Conclusion
Inhaled mannitol can be used to induce VCD. It is well tolerated and allows laryngeal and bronchial hyperresponsiveness to be evaluated in the same session.
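The reported interrater agreement statistic can be reproduced with Cohen's kappa. The sketch below computes kappa from a 2x2 agreement table; the abstract reports only kappa = 0.696, not the underlying counts, so the example numbers are purely illustrative.

```python
# Cohen's kappa for two raters making a binary judgement (VCD present/absent).
def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = both raters yes, b = rater1 yes/rater2 no,
    c = rater1 no/rater2 yes, d = both raters no."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement from each rater's marginal proportions.
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

# Perfect agreement gives kappa = 1; agreement no better than chance gives 0.
print(cohens_kappa(5, 0, 0, 9))   # 1.0
print(cohens_kappa(1, 1, 1, 1))   # 0.0
```

With the real rater-by-rater counts from the study, this function would return the published value of 0.696.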



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2cdd8BC
via IFTTT

Vocal Health Education and Medical Resources for Graduate-Level Vocal Performance Students

Publication date: Available online 25 August 2016
Source:Journal of Voice
Author(s): Katherine Latham, Barbara Messing, Melissa Bidlack, Samantha Merritt, Xian Zhou, Lee M. Akst
Objective/Hypothesis
Most agree that education about vocal health and physiology can help singers avoid the development of vocal disorders. However, little is known about how this kind of education is provided to singers as part of their formal training. This study describes the amount of instruction in these topics provided through graduate-level curricula, who provides this instruction, and the kinds of affiliations such graduate singing programs have with medical professionals.
Study Design
This is an online survey of music schools with graduate singing programs.
Methods
Survey questions addressed demographics of the programs, general attitudes about vocal health instruction for singers, the amount of vocal health instruction provided and by whom it was taught, perceived barriers to including more vocal health instruction, and any affiliations the voice program might have with medical personnel.
Results
Eighty-one survey responses were received. Instruction on vocal health was provided in 95% of the schools. In 55% of the schools, none of this instruction was given by a medical professional. Limited time in the curriculum, lack of financial support, and lack of availability of medical professionals were the most frequently reported barriers to providing more instruction. When programs offered more hours of instruction, they were more likely to have some of that instruction given by a medical professional (P = 0.008) and to assess the amount of instruction provided positively (P = 0.001).
Conclusion
There are several perceived barriers to incorporating vocal health education into graduate singing programs. Opportunity exists for more collaboration between vocal pedagogues and medical professionals in the education of singers about vocal health.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bTBSeN
via IFTTT

Symmetric Electrode Spanning Narrows the Excitation Patterns of Partial Tripolar Stimuli in Cochlear Implants

Abstract

In cochlear implants (CIs), standard partial tripolar (pTP) mode reduces current spread by returning a fraction of the current to two adjacent flanking electrodes within the cochlea. Symmetric electrode spanning (i.e., separating both the apical and basal return electrodes from the main electrode by one electrode) has been shown to increase the pitch of pTP stimuli when the ratio of intracochlear return current was fixed. To explain the pitch increase caused by symmetric spanning in pTP mode, this study measured the electrical potentials of both standard and symmetrically spanned pTP stimuli on a main electrode EL8 in five CI ears using electrical field imaging (EFI). In addition, the spatial profiles of evoked compound action potentials (ECAP) and the psychophysical forward masking (PFM) patterns were also measured for both stimuli. The EFI, ECAP, and PFM patterns of a given stimulus differed in shape details, reflecting the different levels of auditory processing and different ratios of intracochlear return current across the measurement methods. Compared to the standard pTP stimuli, the symmetrically spanned pTP stimuli significantly reduced the areas under the curves of the normalized EFI and PFM patterns, without shifting the pattern peaks and centroids (both around EL8). The more focused excitation patterns with symmetric spanning may have caused the previously reported pitch increase, due to an interaction between pitch and timbre perception. By reducing the spread of excitation, symmetric spanning in pTP mode is a promising stimulation strategy that may further increase spectral resolution and frequency selectivity with CIs.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bTcI03
via IFTTT

The Binaural Interaction Component in Barn Owl (Tyto alba) Presents Few Differences to Mammalian Data

Abstract

The auditory brainstem response (ABR) is an evoked potential that reflects the responses to sound by brainstem neural centers. The binaural interaction component (BIC) is obtained by subtracting the sum of the monaural ABR responses from the binaural response. Its latency and amplitude change in response to variations in binaural cues. The BIC is thus thought to reflect the activity of binaural nuclei and is used to non-invasively test binaural processing. However, any conclusions are limited by a lack of knowledge of the relevant processes at the level of individual neurons. The aim of this study was to characterize the ABR and BIC in the barn owl, an animal where the ITD-processing neural circuits are known in great detail. We recorded ABR responses to chirps and to 1 and 4 kHz tones from anesthetized barn owls. General characteristics of the barn owl ABR were similar to those observed in other bird species. The most prominent peak of the BIC was associated with nucleus laminaris and is thus likely to reflect the known processes of ITD computation in this nucleus. However, the properties of the BIC were very similar to previously published mammalian data and did not reveal any specific diagnostic features. For example, the polarity of the BIC was negative, which indicates a smaller response to binaural stimulation than predicted by the sum of monaural responses. This is contrary to previous predictions for an excitatory-excitatory system such as nucleus laminaris. Similarly, the change in BIC latency with varying ITD was not distinguishable from mammalian data. Contrary to previous predictions, this behavior appears unrelated to the known underlying neural delay-line circuitry. In conclusion, the generation of the BIC is currently inadequately understood and common assumptions about the BIC need to be reconsidered when interpreting such measurements.
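The BIC derivation described above, subtracting the sum of the monaural ABRs from the binaural ABR, can be sketched with toy waveforms. The Gaussian pulse shapes and the 1.6 amplitude factor below are illustrative stand-ins, not owl data; the point is only the sign convention of the subtraction.

```python
import numpy as np

# Toy ABR waveforms on a 10 ms epoch (arbitrary units).
t = np.linspace(0, 0.01, 500)
left = np.exp(-((t - 0.004) / 0.0005) ** 2)      # monaural left response
right = np.exp(-((t - 0.004) / 0.0005) ** 2)     # monaural right response
# Binaural response smaller than left + right (subadditive, as in the study).
binaural = 1.6 * np.exp(-((t - 0.004) / 0.0005) ** 2)

# BIC = binaural ABR - (sum of monaural ABRs).
bic = binaural - (left + right)

# A negative BIC peak means the binaural response is smaller than the sum
# of the monaural responses -- the polarity reported for the barn owl.
print(bic.min() < 0)   # True
```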



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ccCijO
via IFTTT

The Role of Emergent Bilingualism in the Development of Morphological Awareness in Arabic and Hebrew

Purpose
The purpose of the present study was to investigate the role of dual language development and cross-linguistic influence on morphological awareness in young bilinguals' first language (L1) and second language (L2). We examined whether (a) the bilingual children (L1/L2 Arabic and L1/L2 Hebrew) precede their monolingual Hebrew- or Arabic-speaking peers in L1 and L2 morphological awareness, and (b) 1 Semitic language (Arabic) has cross-linguistic influence on another Semitic language (Hebrew) in morphological awareness.
Method
The study sample comprised 93 six-year-old children. The bilinguals had attended bilingual Hebrew-Arabic kindergartens for 1 academic year and were divided into 2 groups: home language Hebrew (L1) and home language Arabic (L1). These groups were compared to age-matched monolingual Hebrew speakers and monolingual Arabic speakers. We used nonwords similar in structure to familiar words in both target languages, representing 6 inflectional morphological categories.
Results
L1 Arabic and L1 Hebrew bilinguals performed significantly better than Arabic- and Hebrew-speaking monolinguals in the respective languages. Differences were not found between the bilingual groups. We found evidence of cross-linguistic transfer of morphological awareness from Arabic to Hebrew in 2 categories, bound possessives and dual number, probably because these categories are more salient in Palestinian Spoken Arabic than in Hebrew.
Conclusions
We conclude that children with even an initial exposure to an L2 show accelerated sensitivity to word structure in both of their languages. We suggest that this is because the two Semitic languages, Arabic and Hebrew, share a common core of linguistic features, together with favorable contextual and instructional factors.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2blZj2H
via IFTTT

A smartphone-based architecture to detect and quantify freezing of gait in Parkinson’s disease

Publication date: October 2016
Source:Gait & Posture, Volume 50
Author(s): Marianna Capecci, Lucia Pepa, Federica Verdini, Maria Gabriella Ceravolo
Introduction
The freezing of gait (FOG) is a common and highly distressing motor symptom in patients with Parkinson’s Disease (PD). Effective management of FOG is difficult given its episodic nature, heterogeneous manifestation, and limited responsiveness to drug treatment.
Methods
In order to verify the acceptance of a smartphone-based architecture and its reliability at detecting FOG in real time, we studied 20 patients suffering from PD-related FOG. They were asked to perform a video-recorded Timed Up and Go (TUG) test, with and without dual tasks, while wearing the smartphone. Video and accelerometer recordings were synchronized in order to assess the reliability of the FOG detection system against the judgement of the clinicians assessing the videos. The architecture uses two different algorithms: one applying the Freezing and Energy Index (the Moore-Bächlin Algorithm), and the other adding information about step cadence to algorithm 1.
Results
A total of 98 FOG events were recognized by clinicians based on the video recordings, of which only 7 were missed by the application. Sensitivity and specificity were 70.1% and 84.1%, respectively, for the Moore-Bächlin Algorithm, rising to 87.57% and 94.97%, respectively, for algorithm 2 (McNemar value=28.42; p=0.0073).
Conclusion
These results confirm previous data on the reliability of the Moore-Bächlin Algorithm, while indicating that the evolution of this architecture can identify FOG episodes with higher sensitivity and specificity. An acceptable, reliable, and easy-to-implement FOG detection system can support better quantification of the phenomenon and hence provide data useful for ascertaining the efficacy of therapeutic approaches.
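The Freezing Index at the core of the Moore-Bächlin approach is the ratio of accelerometer power in a "freeze" band (roughly 3-8 Hz) to power in a "locomotor" band (roughly 0.5-3 Hz); a window is flagged as FOG when the ratio exceeds a threshold. A minimal sketch with synthetic signals; the band edges, window length, and sampling rate here are illustrative, not the paper's parameters.

```python
import numpy as np

def freezing_index(signal, fs):
    """Power in the freeze band (3-8 Hz) over power in the locomotor
    band (0.5-3 Hz), from one window of accelerometer data."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    freeze = spectrum[(freqs >= 3) & (freqs < 8)].sum()
    loco = spectrum[(freqs >= 0.5) & (freqs < 3)].sum()
    return freeze / (loco + 1e-12)   # epsilon avoids division by zero

fs = 100                       # Hz, a plausible smartphone sampling rate
t = np.arange(4 * fs) / fs     # one 4 s analysis window
walking = np.sin(2 * np.pi * 1.5 * t)     # ~1.5 Hz step rhythm
trembling = np.sin(2 * np.pi * 6.0 * t)   # ~6 Hz freezing-like tremor

print(freezing_index(walking, fs) < 1)    # True: low FI, normal gait
print(freezing_index(trembling, fs) > 1)  # True: high FI, freezing-like
```

The second algorithm in the study additionally uses step cadence; that refinement is not reproduced here.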



from #Audiology via ola Kala on Inoreader http://ift.tt/2bivRck
via IFTTT

Modulation of lower limb muscle activity induced by curved walking in typically developing children

Publication date: October 2016
Source:Gait & Posture, Volume 50
Author(s): R. Gross, F. Leboeuf, M. Lempereur, T. Michel, B. Perrouin-Verbe, S. Vieilledent, O. Rémy-Néris




from #Audiology via ola Kala on Inoreader http://ift.tt/2bkMr7Y
via IFTTT

Gait event detection in laboratory and real life settings: Accuracy of ankle and waist sensor based methods

Publication date: October 2016
Source:Gait & Posture, Volume 50
Author(s): Fabio A. Storm, Christopher J. Buckley, Claudia Mazzà
Wearable sensor technology based on inertial measurement units (IMUs) is leading the transition from laboratory-based gait analysis to daily-life gait monitoring. However, the validity of IMU-based methods for the detection of gait events has only been tested in laboratory settings, which may not reproduce real-life walking patterns. The aim of this study was to evaluate the accuracy of two algorithms for the detection of gait events and temporal parameters during free-living walking: one based on two shank-worn inertial sensors, and the other based on one waist-worn sensor. The algorithms were applied to gait data from ten healthy subjects walking both indoors and outdoors, completing protocols that entailed both straight supervised walking and free walking in an urban environment. The values obtained from the inertial sensors were compared to pressure insole data. The shank-based method showed very accurate initial contact, stride time, and step time estimation (<14 ms error). Accuracy of final contact timings and stance time was lower (28–51 ms error range). The error of the temporal parameter variability estimates was in the range 0.09–0.89%. The waist method failed to detect about 1% of the total steps and performed worse than the shank method, but its temporal parameter estimation was still satisfactory. Both methods showed negligible differences in accuracy across the experimental conditions, which suggests their applicability to the analysis of free-living gait.
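The abstract does not spell out the shank algorithm, but a common shank-gyroscope approach identifies initial contacts from prominent negative peaks of the sagittal angular velocity and derives stride time from the interval between consecutive contacts of the same leg. A hypothetical sketch on a synthetic signal; the threshold and the signal shape are illustrative assumptions, not the paper's method.

```python
import numpy as np

def initial_contacts(gyro, threshold=-1.0):
    """Return sample indices of local minima below threshold (rad/s),
    treated here as candidate initial-contact events."""
    idx = []
    for i in range(1, len(gyro) - 1):
        if gyro[i] < threshold and gyro[i] < gyro[i - 1] and gyro[i] <= gyro[i + 1]:
            idx.append(i)
    return np.array(idx)

fs = 100                      # Hz
t = np.arange(5 * fs) / fs    # 5 s of walking
# Synthetic shank angular velocity: one sharp negative peak per second.
gyro = -2.0 * np.maximum(np.sin(2 * np.pi * 1.0 * t), 0) ** 8

ic = initial_contacts(gyro)
stride_times = np.diff(ic) / fs   # seconds between consecutive contacts
print(stride_times.mean())        # 1.0
```

In the study itself, timings like these were validated against pressure insoles, which detect ground contact directly.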



from #Audiology via ola Kala on Inoreader http://ift.tt/2bioP4h
via IFTTT

Gender Identification Using High-Frequency Speech Energy: Effects of Increasing the Low-Frequency Limit.

Objective
The purpose of this study was to investigate the ability of normal-hearing listeners to use high-frequency energy for gender identification from naturally produced speech signals.
Design
Two experiments were conducted using a repeated-measures design. Experiment 1 investigated the effects of increasing high-pass filter cutoff (i.e., increasing the low-frequency spectral limit) on gender identification from naturally produced vowel segments. Experiment 2 studied the effects of increasing high-pass filter cutoff on gender identification from naturally produced sentences. Confidence ratings for the gender identification task were also obtained for both experiments.
Results
Listeners in experiment 1 were capable of extracting talker gender information at levels significantly above chance from vowel segments high-pass filtered up to 8.5 kHz. Listeners in experiment 2 also performed above chance on the gender identification task from sentences high-pass filtered up to 12 kHz.
Conclusions
Cumulatively, the results of both experiments provide evidence that normal-hearing listeners can utilize information from the very high-frequency region (above 4 to 5 kHz) of the speech signal for talker gender identification. These findings are at variance with current assumptions regarding the perceptual information about talker gender within this frequency region. The current results also corroborate and extend previous studies of the use of high-frequency speech energy for perceptual tasks. These findings have potential implications for the study of information contained within the high-frequency region of the speech spectrum and the role this region may play in navigating the auditory scene, particularly when the low-frequency portion of the spectrum is masked by environmental noise sources, or for listeners with substantial hearing loss in the low-frequency region and better hearing sensitivity in the high-frequency region (i.e., reverse-slope hearing loss).
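The high-pass manipulation can be illustrated with an ideal FFT-based filter. The study presumably used conventional filter designs, so this is only a sketch of the concept; the 8.5 kHz cutoff matches the abstract, while the test tones and sampling rate are illustrative.

```python
import numpy as np

def highpass(signal, fs, cutoff):
    """Ideal high-pass: zero all spectral components below cutoff (Hz)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs < cutoff] = 0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 44100
n = int(0.5 * fs)          # 0.5 s of signal
t = np.arange(n) / fs
low = np.sin(2 * np.pi * 200 * t)       # voicing-range component (removed)
high = np.sin(2 * np.pi * 10000 * t)    # very high-frequency component (kept)

filtered = highpass(low + high, fs, 8500)
# Only the 10 kHz component survives the 8.5 kHz high-pass cutoff.
print(np.allclose(np.std(filtered), np.std(high), atol=0.01))   # True
```

After such filtering, only energy above the cutoff remains, which is what lets the experiments isolate the contribution of the high-frequency region to gender identification.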
Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bhJjjk
via IFTTT

Prospective Study of Gastroesophageal Reflux, Use of Proton Pump Inhibitors and H2-Receptor Antagonists, and Risk of Hearing Loss.

Objectives
Gastroesophageal reflux disease (GERD) is common and often treated with proton pump inhibitors (PPIs) or H2-receptor antagonists (H2-RAs). GERD has been associated with exposure of the middle ear to gastric contents, which could cause hearing loss. Treatment of GERD with PPIs and H2-RAs may decrease exposure of the middle ear to gastric acid and decrease the risk of hearing loss. We prospectively investigated the relation between GERD, use of PPIs and H2-RAs, and the risk of hearing loss in 54,883 women in Nurses' Health Study II.
Design
Eligible participants, aged 41 to 58 years in 2005, provided information on medication use and GERD symptoms in 2005, answered the question on hearing loss in 2009 or in 2013, and did not report hearing loss starting before the date of onset of GERD symptoms or medication use. The primary outcome was self-reported hearing loss. Cox proportional hazards regression was used to adjust for potential confounders.
Results
During 361,872 person-years of follow-up, 9842 new cases of hearing loss were reported. Compared with no GERD symptoms, higher frequency of GERD symptoms was associated with higher risk of hearing loss (multivariable adjusted relative risks:

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bidpNU
via IFTTT

The Benefits of Increased Sensation Level and Bandwidth for Spatial Release From Masking.

Objective
Spatial release from masking (SRM) can increase speech intelligibility in complex listening environments. The goal of the present study was to document how speech-in-speech stimuli could be best processed to encourage optimum SRM for listeners who represent a range of ages and amounts of hearing loss. We examined the effects of equating stimulus audibility among listeners, presenting stimuli at uniform sensation levels (SLs), and filtering stimuli at two separate bandwidths.
Design
Seventy-one participants completed two speech intelligibility experiments (36 listeners in experiment 1; all 71 in experiment 2) in which a target phrase from the coordinate response measure (CRM) and two masking phrases from the CRM were presented simultaneously via earphones using a virtual spatial array, such that the target sentence was always at 0 degrees azimuth and the maskers were either colocated or positioned at +/-45 degrees. Experiments 1 and 2 examined the impacts of SL, age, and hearing loss on SRM. Experiment 2 also assessed the effects of stimulus bandwidth on SRM.
Results
Overall, listeners' ability to achieve SRM improved with increased SL. Younger listeners with less hearing loss achieved more SRM than older or hearing-impaired listeners. It was hypothesized that SL and bandwidth would result in dissociable effects on SRM. However, acoustical analysis revealed that effective audible bandwidth, defined as the highest frequency at which the stimulus was audible at both ears, was the best predictor of performance. Thus, increasing SL seemed to improve SRM by increasing the effective bandwidth rather than by increasing the level of already audible components.
Conclusions
Performance for all listeners, regardless of age or hearing loss, improved with an increase in overall SL and/or bandwidth, but the improvement was small relative to the benefits of spatial separation.
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bhJaMK
via IFTTT

The Acoustics of Word-Initial Fricatives and Their Effect on Word-Level Intelligibility in Children With Bilateral Cochlear Implants.

Objectives
Previous research has found that relative to their peers with normal hearing (NH), children with cochlear implants (CIs) produce the sibilant fricatives /s/ and /ʃ/ less accurately and with less subphonemic acoustic contrast. The present study sought to further investigate these differences across groups in two ways. First, subphonemic acoustic properties were investigated in terms of dynamic acoustic features that indexed more than just the contrast between /s/ and /ʃ/. Second, the authors investigated whether such differences in subphonemic acoustic contrast between sibilant fricatives affected the intelligibility of sibilant-initial single-word productions by children with CIs and their peers with NH.
Design
In experiment 1, productions of /s/ and /ʃ/ in word-initial prevocalic contexts were elicited from 22 children with bilateral CIs (aged 4 to 7 years) who had at least 2 years of CI experience and from 22 chronological-age-matched peers with NH. Acoustic features were measured from 17 points across the fricatives: peak frequency was measured to index the place-of-articulation contrast; spectral variance and amplitude drop were measured to index the degree of sibilance. These acoustic trajectories were fitted with growth-curve models to analyze time-varying spectral change. In experiment 2, phonemically accurate word productions elicited in experiment 1 were embedded within four-talker babble and played to 80 adult listeners with NH. Listeners were asked to repeat the words, and their accuracy rate was used as a measure of the intelligibility of the word productions. Regression analyses were run to test which acoustic properties measured in experiment 1 predicted the intelligibility scores from experiment 2.
Results
The peak frequency trajectories indicated that the children with CIs produced less acoustic contrast between /s/ and /ʃ/. Group differences were observed in terms of the dynamic aspects (i.e., the trajectory shapes) of the acoustic properties. In the productions by children with CIs, the peak frequency and the amplitude drop trajectories were shallower, and the spectral variance trajectories were more asymmetric, exhibiting greater increases in variance (i.e., reduced sibilance) near the fricative-vowel boundary. The listeners' responses to the word productions indicated that when produced by children with CIs, /ʃ/-initial words were significantly more intelligible than /s/-initial words. However, when produced by children with NH, /s/-initial words and /ʃ/-initial words were equally intelligible. Intelligibility was partially predicted from the acoustic properties (Cox & Snell pseudo-R2 > 0.190), and the significant predictors were predominantly dynamic, rather than static, ones.
Conclusions
Productions from children with CIs differed from those produced by age-matched NH controls in terms of their subphonemic acoustic properties. The intelligibility of sibilant-initial single-word productions by children with CIs is sensitive to the place of articulation of the initial consonant (/ʃ/-initial words were more intelligible than /s/-initial words), but productions by children with NH were equally intelligible across both places of articulation. Therefore, children with CIs still exhibit differential production abilities for sibilant fricatives at an age when their NH peers do not.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bidBNa
via IFTTT

A smartphone-based architecture to detect and quantify freezing of gait in Parkinson’s disease

Publication date: October 2016
Source:Gait & Posture, Volume 50
Author(s): Marianna Capecci, Lucia Pepa, Federica Verdini, Maria Gabriella Ceravolo
Introduction: Freezing of gait (FOG) is a common and highly distressing motor symptom in patients with Parkinson's Disease (PD). Effective management of FOG is difficult given its episodic nature, heterogeneous manifestation, and limited responsiveness to drug treatment. Methods: To verify the acceptance of a smartphone-based architecture and its reliability at detecting FOG in real time, we studied 20 patients suffering from PD-related FOG. They were asked to perform a video-recorded Timed Up and Go (TUG) test, with and without dual tasks, while wearing the smartphone. Video and accelerometer recordings were synchronized in order to assess the reliability of the FOG detection system as compared to the judgement of the clinicians assessing the videos. The architecture uses two different algorithms: one applying the Freezing and Energy Index (Moore-Bächlin Algorithm), and the other (algorithm 2) adding step-cadence information to algorithm 1. Results: A total of 98 FOG events were recognized by clinicians based on video recordings, while only 7 FOG events were missed by the application. Sensitivity and specificity were 70.1% and 84.1%, respectively, for the Moore-Bächlin Algorithm, rising to 87.57% and 94.97%, respectively, for algorithm 2 (McNemar value=28.42; p=0.0073). Conclusion: Results confirm previous data on the reliability of the Moore-Bächlin Algorithm, while indicating that the evolution of this architecture can identify FOG episodes with higher sensitivity and specificity. An acceptable, reliable and easy-to-implement FOG detection system can support a better quantification of the phenomenon and hence provide data useful to ascertain the efficacy of therapeutic approaches.
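The Freezing and Energy Index used by algorithm 1 builds on the Moore-Bächlin Freeze Index: the ratio of acceleration power in a 3–8 Hz "freeze" band to power in a 0.5–3 Hz locomotor band, gated by total band energy so that quiet standing is not flagged. A minimal sketch of that windowed computation follows; the band limits match the published algorithm, but the window length, overlap, and any decision thresholds here are illustrative, not the paper's calibrated values:

```python
import numpy as np

def freeze_index(accel, fs, window_s=4.0, freeze_band=(3.0, 8.0), loco_band=(0.5, 3.0)):
    """Windowed Moore-Bachlin Freeze Index (illustrative sketch).

    accel: 1-D vertical acceleration trace; fs: sampling rate in Hz.
    Returns (window_start_s, freeze_index, total_band_power) per window.
    """
    n = int(window_s * fs)
    out = []
    for start in range(0, len(accel) - n + 1, n // 2):  # 50% window overlap
        seg = np.asarray(accel[start:start + n], dtype=float)
        seg = seg - seg.mean()                      # remove gravity/DC offset
        power = np.abs(np.fft.rfft(seg)) ** 2       # periodogram of the window
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)

        def band(lo, hi):
            return power[(freqs >= lo) & (freqs < hi)].sum()

        freeze_p, loco_p = band(*freeze_band), band(*loco_band)
        fi = freeze_p / loco_p if loco_p > 0 else np.inf
        # The "energy" half of the index: FOG is only declared when the
        # total power freeze_p + loco_p also exceeds a standing-still threshold.
        out.append((start / fs, fi, freeze_p + loco_p))
    return out
```

A locomotor-band oscillation yields a small index, while freeze-band trembling drives it up, which is what the cadence information in algorithm 2 then refines.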



from #Audiology via ola Kala on Inoreader http://ift.tt/2bivRck
via IFTTT

Modulation of lower limb muscle activity induced by curved walking in typically developing children

Publication date: October 2016
Source:Gait & Posture, Volume 50
Author(s): R. Gross, F. Leboeuf, M. Lempereur, T. Michel, B. Perrouin-Verbe, S. Vieilledent, O. Rémy-Néris




from #Audiology via ola Kala on Inoreader http://ift.tt/2bkMr7Y
via IFTTT

Gait event detection in laboratory and real life settings: Accuracy of ankle and waist sensor based methods

Publication date: October 2016
Source:Gait & Posture, Volume 50
Author(s): Fabio A. Storm, Christopher J. Buckley, Claudia Mazzà
Wearable sensor technology based on inertial measurement units (IMUs) is leading the transition from laboratory-based gait analysis to daily-life gait monitoring. However, the validity of IMU-based methods for the detection of gait events has only been tested in laboratory settings, which may not reproduce real-life walking patterns. The aim of this study was to evaluate the accuracy of two algorithms for the detection of gait events and temporal parameters during free-living walking, one based on two shank-worn inertial sensors, and the other based on one waist-worn sensor. The algorithms were applied to gait data of ten healthy subjects walking both indoors and outdoors, completing protocols that entailed both straight supervised and free walking in an urban environment. The values obtained from the inertial sensors were compared to pressure insole data. The shank-based method showed very accurate initial contact, stride time and step time estimation (<14 ms error). Accuracy of final contact timings and stance time was lower (28–51 ms error range). The error of temporal parameter variability estimates was in the range 0.09–0.89%. The waist method failed to detect about 1% of the total steps and performed worse than the shank method, but its temporal parameter estimation was still satisfactory. Both methods showed negligible differences in their accuracy when the different experimental conditions were compared, which suggests their applicability in the analysis of free-living gait.
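For the temporal parameters compared above, stride time is the interval between successive initial contacts of the same foot, step time is the interval between initial contacts of opposite feet, and variability is commonly summarized as a coefficient of variation. A small sketch, assuming each detection algorithm outputs per-foot initial-contact timestamps (the function name and output layout are illustrative, not from the paper):

```python
import numpy as np

def temporal_parameters(left_ic, right_ic):
    """Stride/step times from initial-contact (IC) timestamps in seconds.

    left_ic, right_ic: sorted per-foot IC times, as a shank- or waist-worn
    IMU pipeline might output. Standard gait definitions; names are illustrative.
    """
    # Stride time: same-foot IC to the next IC of that same foot.
    stride = np.concatenate([np.diff(left_ic), np.diff(right_ic)])
    # Step time: interval between consecutive ICs of opposite feet.
    all_ic = np.sort(np.concatenate([left_ic, right_ic]))
    step = np.diff(all_ic)
    # Variability summary: coefficient of variation of stride time, in percent.
    cv_pct = 100.0 * stride.std(ddof=1) / stride.mean()
    return {"stride_time": stride, "step_time": step, "stride_time_cv_pct": cv_pct}
```

Comparing these quantities computed from IMU-detected ICs against the same quantities from pressure-insole ICs gives the timing errors the study reports.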



from #Audiology via ola Kala on Inoreader http://ift.tt/2bioP4h
via IFTTT

Indices of Effortful Listening Can Be Mined from Existing Electroencephalographic Data.


Objectives: Studies suggest that theta (~4 to 7 Hz), alpha (~8 to 12 Hz), and stimulus-evoked dynamics of the electroencephalogram index effortful listening. Numerous auditory event-related potential datasets exist without thorough examination of these features. The feasibility of mining those datasets for such features is assessed here. Design: In a standard auditory-oddball paradigm, 12 listeners heard deviant high-frequency tones (10%) interspersed among low-frequency tones (90%), with the deviants either "near" or "far" from the standards in frequency. Results: During active listening (deviance detection; experiment 1), sustained frontal midline theta power and gamma-band inter-trial phase coherence were greater for the near condition. No significant "near"/"far" differences were observable during passive exposure to the same sounds (experiment 2). Conclusions: Increased theta power likely reflects increased utilization of cognitive-control processes (e.g., working memory) that rely on frontal cortical networks. Inter-trial phase coherence differences may reflect differences in attention-modulated stimulus encoding. Reanalysis of existing datasets can usefully inform future work on listening effort. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bid0ey
via IFTTT

Neonate Auditory Brainstem Responses to CE-Chirp and CE-Chirp Octave Band Stimuli II: Versus Adult Auditory Brainstem Responses.


Objectives: The purpose of the study was to examine the differences in auditory brainstem response (ABR) latency and amplitude indices to the CE-Chirp stimuli in neonates versus young adults as a function of stimulus level, rate, polarity, frequency and gender. Design: Participants were 168 healthy neonates and 20 normal-hearing young adults. ABRs were obtained to air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps. The effect of stimulus level was also examined with bone-conducted CE-Chirps and CE-Chirp octave band stimuli. The effect of gender was examined across all stimulus manipulations. Results: In general, ABR wave V amplitudes were significantly larger (p 0.05). Conclusions: Significant differences in ABR latencies and amplitudes exist between newborns and young adults using CE-Chirp stimuli. These differences are consistent with differences to traditional click and tone burst stimuli and reflect maturational differences as a function of age. These findings continue to emphasize the importance of interpreting ABR results using age-based normative data. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bhJsDd
via IFTTT

Screening, Education, and Rehabilitation Services for Hearing Loss Provided to Clients with Low Vision: Measured and Perceived Value Among Participants of the Vision-Hearing Project.


Objectives: Combined vision and hearing impairment, termed dual sensory impairment (DSI), is associated with poorer health outcomes compared with a single sensory loss alone. Separate systems of care exist for visual and hearing impairment, which potentially limits the effectiveness of managing DSI. To address this, a Hearing Screening Education Model (HSEM) was offered to older adults attending a low-vision clinic in Australia within this pilot study. The present study aimed to evaluate the benefits of seeking help on hearing handicap, self-perceived health, and use of community services among those identified with unmet hearing needs after participation in the HSEM. Design: Of 210 older adults (>55 years of age) who completed the HSEM and were referred for follow-up, 169 returned for a follow-up interview at least 12 months later. Of these, 68 (40.2%) sought help, and the majority were seen by a hearing healthcare provider (89.7%). Changes in hearing handicap, quality of life, and reliance on community services between the baseline and 12-month follow-up were compared between those who sought help and those who did not. In addition, the perceived value of the HSEM was assessed. Results: Results showed that there was no significant difference in hearing handicap between those who sought help (mean change -1.02, SD = 7.97, p = 0.3) and those who did not (mean change 0.94, SD = 7.68, p = 0.3), p = 0.18. The mental component of the SF-36 worsened significantly between baseline and follow-up measures across the whole group (mean change -2.49, SD = 9.98, p = 0.002). This was largely driven by those not seeking help, rather than those seeking help, but was not significantly different between the two groups. Those who sought help showed a significant reduction in the use of community services compared with those who did not. Further, all participants positively viewed the HSEM's underlying principle of greater integration between vision and hearing services. 
Conclusions: These findings suggest a need to further develop and evaluate integrated models of healthcare for older adults with DSI. It also highlights the importance of using broader measures of benefit, other than use of hearing aids to evaluate outcomes of hearing healthcare programs. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bid8KL
via IFTTT

Age-Related Differences in Listening Effort During Degraded Speech Recognition.


Objectives: The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design: Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Results: Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. 
Conclusions: OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bhJUBF
via IFTTT

Overlap and Nonoverlap Between the ICF Core Sets for Hearing Loss and Otology and Audiology Intake Documentation.


Objectives: The International Classification of Functioning Disability and Health (ICF) core sets for hearing loss (HL) were developed to serve as a standard for the assessment and reporting of the functioning and health of patients with HL. The aim of the present study was to compare the content of the intake documentation currently used in secondary and tertiary hearing care settings in the Netherlands with the content of the ICF core sets for HL. Research questions were (1) to what extent are the ICF core sets for HL represented in the Dutch Otology and Audiology intake documentation? (2) are there any extra ICF categories expressed in the intake documentation that are currently not part of the ICF core sets for HL, or constructs expressed that are not part of the ICF? Design: Multicenter patient record study including 176 adult patients from two secondary, and two tertiary hearing care settings. The intake documentation was selected from anonymized patient records. The content was linked to the appropriate ICF category from the whole ICF classification using established linking rules. The extent to which the ICF core sets for HL were represented in the intake documentation was determined by assessing the overlap between the ICF categories in the core sets and the list of unique ICF categories extracted from the intake documentation. Any extra constructs that were expressed in the intake documentation but are not part of the core sets were described as well, differentiating between ICF categories that are not part of the core sets and constructs that are not part of the ICF classification. Results: In total, otology and audiology intake documentation represented 24 of the 27 brief ICF core set categories (i.e., 89%), and 60 of the 117 comprehensive ICF core set categories (i.e., 51%). 
Various ICF core sets categories were not represented, including higher mental functions (body functions), civic life aspects (activities and participation), and support and attitudes of family (environmental factors). One extra ICF category emerged from the intake documentation that is currently not included in the core sets: sleep functions. Various personal factors emerged from the intake documentation that are currently not defined in the ICF classification. Conclusions: The results showed substantial overlap between the ICF core sets for HL and the intake documentation of otology and audiology, but also revealed areas of nonoverlap. These findings contribute to the evaluation of the content validity of the core sets. The overlap can be viewed as supportive of the core sets' content validity. The nonoverlap in core set categories indicates that current Dutch intake procedures may not cover all aspects relevant to patients with ear/hearing problems. The identification of extra constructs suggests that the core sets may not include all areas of functioning that are relevant to Dutch Otology and Audiology patients. Consideration of incorporating both aspects into future intake practice deserves attention. Operationalization of the ICF core set categories, including the extra constructs identified in this study into a practical and integral intake instrument seems an important next step. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bie6XC
via IFTTT

Missing Data in the Field of Otorhinolaryngology and Head & Neck Surgery: Need for Improvement.


Objective: Clinical studies often face missing data. Data can be missing for various reasons: for example, patients move, certain measurements are administered only in high-risk groups, or patients are unable to attend clinic because of their health status. There are various ways to handle these missing data (e.g., complete-case analyses, mean substitution). Each of these techniques potentially influences both the analyses and the results of a study. The first aim of this structured review was to analyze how often researchers in the field of otorhinolaryngology/head & neck surgery report missing data. The second aim was to systematically describe how researchers handle missing data in their analyses. The third aim was to provide a solution on how to deal with missing data by means of the multiple imputation technique. With this review, we aim to contribute to a higher quality of reporting in otorhinolaryngology research. Design: Clinical studies among the 398 most recently published research articles in three major journals in the field of otorhinolaryngology/head & neck surgery were analyzed based on how researchers reported and handled missing data. Results: Of the 316 clinical studies, 85 reported some form of missing data. Of those 85, only a small number (12 studies, 3.8%) actively handled the missingness in their data. The majority of researchers excluded incomplete cases, which results in biased outcomes and a drop in statistical power. Conclusions: Within otorhinolaryngology research, missing data are largely ignored and underreported, and consequently handled inadequately. This has a major impact on the results and conclusions drawn from this research. Based on the outcomes of this review, we provide solutions on how to deal with missing data. To illustrate, we clarify the use of multiple imputation techniques, which recently became widely available in standard statistical programs. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
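The multiple imputation approach the review recommends fills each missing value several times, analyzes each completed dataset, and pools the results with Rubin's rules, so the extra uncertainty introduced by imputation is carried into the standard errors rather than ignored. A deliberately simple sketch for pooling a single mean; the hot-deck draw from observed values stands in for the model-based draws a real analysis would use:

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_pooled_mean(x, m=20):
    """Multiply impute missing values in x and pool a mean via Rubin's rules.

    x: 1-D float array with np.nan marking missing entries. Each of the m
    imputations fills the gaps with random draws from the observed values
    (a simple hot-deck stand-in for model-based imputation), the mean is
    estimated on each completed copy, and the m estimates are pooled.
    Returns (pooled_mean, pooled_standard_error).
    """
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    observed = x[~miss]
    estimates, within_var = [], []
    for _ in range(m):
        filled = x.copy()
        filled[miss] = rng.choice(observed, size=miss.sum(), replace=True)
        estimates.append(filled.mean())
        within_var.append(filled.var(ddof=1) / filled.size)  # variance of the mean
    q_bar = np.mean(estimates)            # pooled point estimate
    u_bar = np.mean(within_var)           # within-imputation variance
    b = np.var(estimates, ddof=1)         # between-imputation variance
    total = u_bar + (1.0 + 1.0 / m) * b   # Rubin's total variance
    return q_bar, np.sqrt(total)
```

Contrast this with complete-case analysis, which simply drops the rows with missing entries: the pooled standard error above grows with the between-imputation spread, making the lost information visible instead of silently shrinking the sample.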

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bhJsmH
via IFTTT

Neonate Auditory Brainstem Responses to CE-Chirp and CE-Chirp Octave Band Stimuli I: Versus Click and Tone Burst Stimuli.


Objectives: The purpose of the study was to generate normative auditory brainstem response (ABR) wave component peak latency and amplitude values for neonates with air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli (i.e., 500, 1000, 2000, and 4000 Hz). A second objective was to compare neonate ABRs to CE-Chirp stimuli with ABR responses to traditional click and tone burst stimuli with the same stimulus parameters. Design: Participants were 168 healthy neonates. ABRs were obtained to air- and bone-conducted CE-Chirp and click stimuli and air-conducted CE-Chirp octave band and tone burst stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps and clicks. The effect of stimulus level was also examined with bone-conducted CE-Chirps and clicks and air-conducted CE-Chirp octave band stimuli. Results: In general, ABR wave V amplitudes to air- and bone-conducted CE-Chirp stimuli were significantly larger (p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bidL7b
via IFTTT

Prevalence of Hearing Loss Among a Representative Sample of Canadian Children and Adolescents, 3 to 19 Years of Age.


Objectives: There are no nationally representative hearing loss (HL) prevalence data available for Canadian youth using direct measurements. The present study objectives were to estimate the national prevalence of HL using audiometric pure-tone thresholds (0.5 to 8 kHz) and/or distortion product otoacoustic emissions (DPOAEs) for children and adolescents, aged 3 to 19 years. Design: This cross-sectional population-based study presents findings from the 2012/2013 Canadian Health Measures Survey, entailing an in-person household interview and hearing measurements conducted in a mobile examination clinic. The initial study sample included 2591 participants, aged 3 to 19 years, representing 6.5 million Canadians (3.3 million males). After exclusions, subsamples consisted of 2434 participants, aged 3 to 19 years, and 1879 participants, aged 6 to 19 years, with valid audiometric results. Eligible participants underwent otoscopic examination, tympanometry, DPOAE, and audiometry. HL was defined as a pure-tone average >20 dB for 6- to 18-year-olds and >=26 dB for 19-year-olds, for one or more of the following: four-frequency (0.5, 1, 2, and 4 kHz) pure-tone average, high-frequency (3, 4, 6, and 8 kHz) pure-tone average, and low-frequency (0.5, 1, and 2 kHz) pure-tone average. Mild HL was defined as >20 to 40 dB (6- to 18-year-olds) and >=26 to 40 dB (19-year-olds). Moderate or worse HL was defined as >40 dB (6- to 19-year-olds). HL in 3- to 5-year-olds (n = 555) was defined as absent DPOAEs, as audiometry was not conducted. Self-reported HL was evaluated using the Health Utilities Index Mark 3 hearing questions. Results: The primary study outcome indicates that 7.7% of Canadian youth, aged 6 to 19, had any HL for one or more pure-tone average. Four-frequency pure-tone average and high-frequency pure-tone average HL prevalence was 4.7 and 6.0%, respectively, whereas 5.8% had a low-frequency pure-tone average HL. Significantly more children/adolescents had unilateral HL. 
Mild HL was significantly more common than moderate or worse HL for each pure-tone average. Among Canadians, aged 6 to 19, less than 2.2% had sensorineural HL. Among Canadians, aged 3 to 19, less than 3.5% had conductive HL. Absent DPOAEs were found in 7.1E% of 3- to 5-year-olds, and in 3.4E% of 6- to 19-year-olds. Among participants eligible for the hearing evaluation and excluding missing data cases (n = 2575), 17.0% had excessive or impacted pus/wax in one or both ears. Self-reported HL in Canadians, aged 6 to 19, was 0.6E%, and 65.3% (aged 3 to 19) reported never having had their hearing tested. E indicates that a high sampling variability is associated with the estimate (coefficient of variation between 16.6% and 33.3%) and that it should be interpreted with caution. Conclusions: This study provides the first estimates of audiometrically measured HL prevalence among Canadian children and adolescents. A larger proportion of youth have measured HL than was previously reported using self-report surveys, indicating that screening using self-report or proxy may not be effective in identifying individuals with mild HL. Results may underestimate the true prevalence of HL due to the large number excluded and the presence of impacted or excessive earwax or pus, precluding an accurate or complete hearing evaluation. The majority of 3- to 5-year-olds with absent DPOAEs likely had conductive HL. Nonetheless, this type of HL, which can be asymptomatic, may become permanent if left untreated. Future research will benefit from analyses that include the slight HL category, for which there is growing support, and from studies that identify factors contributing to HL in this population. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. 
Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
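The HL definitions above amount to a small decision rule: average the thresholds for each pure-tone average (PTA), then grade against the age-specific cut-offs (>20 dB for 6- to 18-year-olds, >=26 dB at 19; mild up to 40 dB, moderate or worse above 40 dB). A sketch of that rule, where the function name and input layout are illustrative assumptions rather than anything from the survey:

```python
# Frequencies (kHz) defining each pure-tone average, per the abstract.
PTAS = {
    "four_frequency": (0.5, 1, 2, 4),
    "high_frequency": (3, 4, 6, 8),
    "low_frequency": (0.5, 1, 2),
}

def classify_hl(thresholds_db, age_years):
    """Grade hearing loss per PTA using the study's age-specific cut-offs.

    thresholds_db: dict mapping frequency in kHz to threshold in dB HL.
    Illustrative sketch for ages 6-19; the names and layout are assumptions.
    """
    def grade(pta):
        # HL requires PTA >20 dB at ages 6-18, but >=26 dB at age 19.
        no_hl = pta <= 20 if age_years <= 18 else pta < 26
        if no_hl:
            return "none"
        return "mild" if pta <= 40 else "moderate_or_worse"

    return {name: grade(sum(thresholds_db[f] for f in freqs) / len(freqs))
            for name, freqs in PTAS.items()}
```

Note how the age-19 rule reclassifies a 22 dB average as normal hearing, which is exactly why the authors flag the slight-HL category (thresholds just above 20 dB) as worth analyzing separately.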

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bhJB9L
via IFTTT

Neural Correlates of Selective Attention With Hearing Aid Use Followed by ReadMyQuips Auditory Training Program.


Objectives: The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Design: Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. Results: After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. 
Conclusions: Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective attention task. RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bidTUv
via IFTTT

Incidence of Pediatric Superior Semicircular Canal Dehiscence and Inner Ear Anomalies: A Large Multicenter Review.

Objective: To determine the pediatric incidence of superior semicircular canal dehiscence (SSCD) and its association with inner ear (IE) anomalies. Study Design: Retrospective chart review. Setting: Two tertiary referral centers. Patients: Children less than 18 years of age who received a computed tomography study with 0.5 mm or finer collimation, including the temporal bones, between 2010 and 2013, for reasons including, but not limited to, hearing loss, trauma, and infection. Interventions: Images were reformatted into Poschl and Stenver planes. Five hundred three computed tomography studies (1,006 temporal bones) were reviewed by experienced, blinded neuroradiologists. Main Outcome Measures: Incidence of SSCD and IE anomalies. Patient age, sex, and diagnosis were recorded. Statistical analysis was performed to compare outcome measures among patient demographics. Results: The incidence of SSCD was 6.2% (31/503) of individuals, and that of an IE anomaly 15.1% (76/503). The co-occurrence of SSCD with an IE anomaly was not significant (1.1%, 40/1,006; p = 0.23; LR = +1.29). The mean age of children with SSCD was lower (5.9 versus 9.8 yr; p = 0.002). SSCD incidence decreased with age (ages

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2c9t6wx
via IFTTT

Timing and Impact of Hearing Healthcare in Adult Cochlear Implant Recipients: A Rural-Urban Comparison.

Objective: The purpose of this study is to compare the timing and impact of hearing healthcare of rural and urban adults with severe hearing loss who use cochlear implants (CI). Study Design: Cross-sectional questionnaire study. Setting: Tertiary referral center. Patients: Adult cochlear implant recipients. Main Outcome Measures: Data collected included county of residence, socioeconomic information, impact of hearing loss on education/employment, and timing of hearing loss treatment. The benefits obtained from cochlear implantation were also evaluated. Results: There were 91 participants (32 from urban counties, 26 from moderately rural counties, and 33 from extremely rural counties). Rural participants have a longer commute time to the CI center (p

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bR3eCq
via IFTTT

Epidemiology of Persistent Tympanic Membrane Perforations Subsequent to Tympanostomy Tubes Assessed With Real World Data.

Objective: To quantify the incidence of persistent tympanic membrane perforation (TMP) after tympanostomy tube (TT) surgery using a large population-based cohort. Study Design: A retrospective cohort study. Setting: Medicaid claims data from 1999 to 2006. Patients: We studied healthy children who had Medicaid eligibility within 6 months of birth, received TTs, and had 2 to 7 years of follow-up. Main Outcome Measures: We operationalized persistent TMP as a charge for tympanoplasty and/or one or more diagnoses of TMP. Results: We identified 47,724 children who received TTs and had >=2 years of eligibility. The incidence of persistent TMP varied based on definition and follow-up. The 2- and 7-year TMP rates were: 0.38% and 3.81% for two TMP diagnoses 6 months apart or tympanoplasty; 0.26% and 2.94% for two TMP diagnoses 6 months apart; 0.13% and 1.73% for tympanoplasty alone; 0.04% and 1.21% for tympanoplasty preceded by one TMP diagnosis; and 0.01% and 0.52% for tympanoplasty and two TMP diagnoses 6 months apart. Reinserting TTs was associated with an increased likelihood of persistent TMP (adjusted hazard ratio [HR] = 1.98, 95% CI 1.49-2.63). Each one-year increase in age was associated with a 49% increase in the risk of persistent TMP. Conclusion: Billing claims data may be used to assess the rate of persistent TMP after TT placement in large populations, yielding results consistent with findings from cohort studies and meta-analyses. Our findings may serve as the basis for future TMP research using real world datasets. Copyright (C) 2016 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2c9tbQV
via IFTTT

Suggestions for a Guideline for Cochlear Implantation in CHARGE Syndrome.

Objective: Identifying aspects for establishing cochlear implantation guidelines for patients with ocular coloboma, heart defects, atresia of the choanae, retardation (of growth and/or of development), genital anomalies, and ear anomalies (CHARGE) syndrome (CS). Study Design: Explorative retrospective study. Setting: Cochlear implant (CI)-centers of tertiary referral centers in the Netherlands. Patients: Ten patients with CS who received a CI between 2002 and 2012. Interventions: Describing the challenges and benefits of cochlear implantation in CS. Main Outcome Measures: Imaging and surgical findings, language development, and Quality-of-life (QoL), compared with two control groups: 1) 34 non-syndromic CI-users and 2) 13 patients with CS without CI because of sufficient hearing. Results: Subjective and objective audiometry and magnetic resonance imaging were necessary to confirm the presence of the cochlear nerve. Surgery in CS was challenging because of enlarged emissary veins, semi-circular-canal aplasia, aberrant facial nerve, and dysplastic cochlear windows, making computed tomography indispensable in surgical preparations. No major intraoperative complications occurred. Despite additional handicaps, all patients showed auditory benefit and improvement in disease-specific QoL. Patients implanted at a relatively young age (5 years) and with minor additional problems, developed spoken language at a basic level comparable to that of the control group of CS patients. Conclusion: A CI should be considered in all patients with CS and severe sensorineural hearing loss. A careful work-up is required, comprising computed tomography, magnetic resonance imaging, objective, and subjective audiometry and assessment by a specialized multidisciplinary team. Cochlear implantation in CS might be complicated by syndrome-related temporal-bone anatomy, and the outcome of the CI is more individually determined. Early implantation should be aimed for. 
Copyright (C) 2016 by Otology & Neurotology, Inc. Image copyright (C) 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bR3uRQ
via IFTTT

Gender Identification Using High-Frequency Speech Energy: Effects of Increasing the Low-Frequency Limit.

Objective: The purpose of this study was to investigate the ability of normal-hearing listeners to use high-frequency energy for gender identification from naturally produced speech signals. Design: Two experiments were conducted using a repeated-measures design. Experiment 1 investigated the effects of increasing the high-pass filter cutoff (i.e., increasing the low-frequency spectral limit) on gender identification from naturally produced vowel segments. Experiment 2 studied the effects of increasing the high-pass filter cutoff on gender identification from naturally produced sentences. Confidence ratings for the gender identification task were also obtained for both experiments. Results: Listeners in experiment 1 were capable of extracting talker gender information at levels significantly above chance from vowel segments high-pass filtered up to 8.5 kHz. Listeners in experiment 2 also performed above chance on the gender identification task from sentences high-pass filtered up to 12 kHz. Conclusions: Cumulatively, the results of both experiments provide evidence that normal-hearing listeners can utilize information from the very high-frequency region (above 4 to 5 kHz) of the speech signal for talker gender identification. These findings are at variance with current assumptions about the perceptual information on talker gender available within this frequency region. The current results also corroborate and extend previous studies of the use of high-frequency speech energy for perceptual tasks. These findings have potential implications for the study of information contained within the high-frequency region of the speech spectrum and the role this region may play in navigating the auditory scene, particularly when the low-frequency portion of the spectrum is masked by environmental noise sources, or for listeners with substantial hearing loss in the low-frequency region and better hearing sensitivity in the high-frequency region (i.e., reverse-slope hearing loss).
Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
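The filtering manipulation described above (raising the low-frequency limit of a speech signal) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' processing chain: a brick-wall FFT-domain high-pass stands in for whatever calibrated filters the study used, and the sample rate, cutoff, and noise "speech token" are all assumptions for the demo.

```python
import numpy as np

def highpass_fft(signal, fs, cutoff_hz):
    """Ideal high-pass filter: zero every FFT bin below cutoff_hz.
    (A sketch only; real stimuli would use properly designed filters.)"""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# 1 s of white noise standing in for a speech token, high-passed at
# 8.5 kHz (the highest cutoff of experiment 1); fs is assumed.
fs = 44100
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)
y = highpass_fft(x, fs, 8500)

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
low = spec[freqs < 8000].mean()
high = spec[freqs >= 8500].mean()
print(low < 1e-3 * high)  # True: energy below the cutoff is gone
```

Listeners in the study would then judge talker gender from only the energy that survives above the cutoff.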

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhJjjk
via IFTTT

Prospective Study of Gastroesophageal Reflux, Use of Proton Pump Inhibitors and H2-Receptor Antagonists, and Risk of Hearing Loss.

Objectives: Gastroesophageal reflux disease (GERD) is common and often treated with proton pump inhibitors (PPIs) or H2-receptor antagonists (H2-RAs). GERD has been associated with exposure of the middle ear to gastric contents, which could cause hearing loss. Treatment of GERD with PPIs and H2-RAs may decrease exposure of the middle ear to gastric acid and decrease the risk of hearing loss. We prospectively investigated the relation between GERD, use of PPIs and H2-RAs, and the risk of hearing loss in 54,883 women in Nurses' Health Study II. Design: Eligible participants, aged 41 to 58 years in 2005, provided information on medication use and GERD symptoms in 2005, answered the question on hearing loss in 2009 or in 2013, and did not report hearing loss starting before the date of onset of GERD symptoms or medication use. The primary outcome was self-reported hearing loss. Cox proportional hazards regression was used to adjust for potential confounders. Results: During 361,872 person-years of follow-up, 9842 new cases of hearing loss were reported. Compared with no GERD symptoms, higher frequency of GERD symptoms was associated with higher risk of hearing loss (multivariable adjusted relative risks:

from #Audiology via ola Kala on Inoreader http://ift.tt/2bidpNU
via IFTTT

The Benefits of Increased Sensation Level and Bandwidth for Spatial Release From Masking.

Objective: Spatial release from masking (SRM) can increase speech intelligibility in complex listening environments. The goal of the present study was to document how speech-in-speech stimuli could be best processed to encourage optimum SRM for listeners who represent a range of ages and amounts of hearing loss. We examined the effects of equating stimulus audibility among listeners, presenting stimuli at uniform sensation levels (SLs), and filtering stimuli at two separate bandwidths. Design: Seventy-one participants completed two speech intelligibility experiments (36 listeners in experiment 1; all 71 in experiment 2) in which a target phrase from the coordinate response measure (CRM) and two masking phrases from the CRM were presented simultaneously via earphones using a virtual spatial array, such that the target sentence was always at 0 degree azimuth angle and the maskers were either colocated or positioned at +/-45 degrees. Experiments 1 and 2 examined the impacts of SL, age, and hearing loss on SRM. Experiment 2 also assessed the effects of stimulus bandwidth on SRM. Results: Overall, listeners' ability to achieve SRM improved with increased SL. Younger listeners with less hearing loss achieved more SRM than older or hearing-impaired listeners. It was hypothesized that SL and bandwidth would result in dissociable effects on SRM. However, acoustical analysis revealed that effective audible bandwidth, defined as the highest frequency at which the stimulus was audible at both ears, was the best predictor of performance. Thus, increasing SL seemed to improve SRM by increasing the effective bandwidth rather than increasing the level of already audible components. Conclusions: Performance for all listeners, regardless of age or hearing loss, improved with an increase in overall SL and/or bandwidth, but the improvement was small relative to the benefits of spatial separation. 
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhJaMK
via IFTTT

The Acoustics of Word-Initial Fricatives and Their Effect on Word-Level Intelligibility in Children With Bilateral Cochlear Implants.

Objectives: Previous research has found that relative to their peers with normal hearing (NH), children with cochlear implants (CIs) produce the sibilant fricatives /s/ and /ʃ/ less accurately and with less subphonemic acoustic contrast. The present study sought to further investigate these differences across groups in two ways. First, subphonemic acoustic properties were investigated in terms of dynamic acoustic features that indexed more than just the contrast between /s/ and /ʃ/. Second, the authors investigated whether such differences in subphonemic acoustic contrast between sibilant fricatives affected the intelligibility of sibilant-initial single word productions by children with CIs and their peers with NH. Design: In experiment 1, productions of /s/ and /ʃ/ in word-initial prevocalic contexts were elicited from 22 children with bilateral CIs (aged 4 to 7 years) who had at least 2 years of CI experience and from 22 chronological age-matched peers with NH. Acoustic features were measured from 17 points across the fricatives: peak frequency was measured to index the place of articulation contrast; spectral variance and amplitude drop were measured to index the degree of sibilance. These acoustic trajectories were fitted with growth-curve models to analyze time-varying spectral change. In experiment 2, phonemically accurate word productions that were elicited in experiment 1 were embedded within four-talker babble and played to 80 adult listeners with NH. Listeners were asked to repeat the words, and their accuracy rate was used as a measure of the intelligibility of the word productions. Regression analyses were run to test which acoustic properties measured in experiment 1 predicted the intelligibility scores from experiment 2. Results: The peak frequency trajectories indicated that the children with CIs produced less acoustic contrast between /s/ and /ʃ/. 
Group differences were observed in terms of the dynamic aspects (i.e., the trajectory shapes) of the acoustic properties. In the productions by children with CIs, the peak frequency and the amplitude drop trajectories were shallower, and the spectral variance trajectories were more asymmetric, exhibiting greater increases in variance (i.e., reduced sibilance) near the fricative-vowel boundary. The listeners' responses to the word productions indicated that when produced by children with CIs, /ʃ/-initial words were significantly more intelligible than /s/-initial words. However, when produced by children with NH, /s/-initial words and /ʃ/-initial words were equally intelligible. Intelligibility was partially predicted from the acoustic properties (Cox & Snell pseudo-R2 > 0.190), and the significant predictors were predominantly dynamic, rather than static, ones. Conclusions: Productions from children with CIs differed from those produced by age-matched NH controls in terms of their subphonemic acoustic properties. The intelligibility of sibilant-initial single-word productions by children with CIs is sensitive to the place of articulation of the initial consonant (/ʃ/-initial words were more intelligible than /s/-initial words), but productions by children with NH were equally intelligible across both places of articulation. Therefore, children with CIs still exhibit differential production abilities for sibilant fricatives at an age when their NH peers do not. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bidBNa
via IFTTT

Effects of Implantation and Reimplantation of Cochlear Implant Electrodes in an In Vivo Animal Experimental Model (Macaca fascicularis).

Objectives: The objectives of this study were to evaluate the effect of reimplanting a cochlear implant electrode in the normal-hearing cochlea of an animal model, to propose measures that may prevent cochlear injury, and, given its close phylogenetic proximity to humans, to evaluate the macaque as a model for electroacoustic stimulation. Design: Simultaneous, bilateral surgical procedures in a group of 5 normal-hearing specimens (Macaca fascicularis) took place in a total of 10 ears. Periodic bilateral auditory testing (distortion product otoacoustic emissions and auditory brainstem evoked responses [ABR]) took place during a 6-month follow-up period. Subsequently, unilateral explantation and reimplantation was performed. Auditory follow-up continued up to 12 months, after which the animals were sacrificed and both temporal bones extracted for histological analysis. Results: Implantation and reimplantation surgeries were performed without complications in 9 of 10 cases. Full insertion depth was achieved at reimplantation in four of five ears. Auditory evaluation: statistically significant differences between implanted and reimplanted ears were observed for the frequencies 2000 and 11,000 Hz; the remaining frequencies showed no differences in distortion product otoacoustic emissions. Before the procedure, average thresholds with click-stimuli ABR of the five animals were 40 dB SPL (implanted group) and 40 dB SPL (reimplanted group). One week after first implantation, average thresholds were 55 dB SPL and 60 dB SPL, respectively. After 12 months of follow-up, the average thresholds were 72.5 dB SPL (implanted group) and 65 dB SPL (reimplanted group). Hearing loss appeared during the first weeks after the first implantation, and no deterioration was observed thereafter. Differences for ABR under click stimulus were not significant between the two ear groups. Similar results were observed with tone-burst ABR. 
A 15 dB shift was observed for the implanted group preoperatively versus 1 week post surgery, and an additional 17.5 dB shift was seen after the 12-month follow-up. For the reimplanted group, a 20 dB shift was observed within the first week post reimplantation surgery and an additional 5 dB after 6 months of follow-up. Statistical analysis revealed significant differences between the implanted and reimplanted ear groups at 4000 Hz (p = 0.034), 12,000 Hz (p = 0.031), and 16,000 Hz (p = 0.031). The histological analysis revealed that the electrode insertion was minimally traumatic for the cochlea, indicating rupture of the basilar membrane in the transition area between the basal turn and the first cochlear turn only in the left ear of Mf1. Conclusions: With application of minimally traumatic surgical techniques, it is possible to maintain high rates of hearing preservation after implantation and even after reimplantation. Partial impairment of auditory thresholds may occur during the first weeks after surgery and remains stable thereafter. Considering the tonotopic distribution of the cochlea, we found a correlation between the histological lesion sites and the auditory findings, suggesting that a rupture of the basilar membrane may impact hearing levels. The macaque proved to be an excellent animal model for cochlear implantation, both functionally and anatomically. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhINlv
via IFTTT

Indices of Effortful Listening Can Be Mined from Existing Electroencephalographic Data.

Objectives: Studies suggest that theta (~4 to 7 Hz), alpha (~8 to 12 Hz), and stimulus-evoked dynamics of the electroencephalogram index effortful listening. Numerous auditory event-related potential datasets exist in which these features have not been thoroughly examined. The feasibility of mining those datasets for such features is assessed here. Design: In a standard auditory-oddball paradigm, 12 listeners heard deviant high-frequency tones (10%) interspersed among low-frequency standard tones (90%), with deviants either "near" or "far" from the standards in frequency. Results: During active listening (deviance detection; experiment 1), sustained frontal midline theta power and gamma-band inter-trial phase coherence were greater for the near condition. No significant "near"/"far" differences were observable during passive exposure to the same sounds (experiment 2). Conclusions: Increased theta power likely reflects increased utilization of cognitive-control processes (e.g., working memory) that rely on frontal cortical networks. Inter-trial phase coherence differences may reflect differences in attention-modulated stimulus encoding. Reanalysis of existing datasets can usefully inform future work on listening effort. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
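The two spectral measures the abstract proposes to mine, band-limited power and inter-trial phase coherence (ITPC), have compact definitions. A minimal sketch follows; the sampling rate, epoch length, trial counts, and synthetic "EEG" are assumptions for illustration, not the study's recordings.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of signal x within the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].mean()

def itpc(trials, fs, f):
    """Inter-trial phase coherence at frequency f: the magnitude of the
    mean unit phase vector across trials (1 = perfectly phase-locked,
    near 0 = random phase across trials)."""
    n = trials.shape[1]
    k = int(round(f * n / fs))  # FFT bin closest to f
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

fs, n = 250, 250  # 1-s epochs at 250 Hz (assumed)
t = np.arange(n) / fs
rng = np.random.default_rng(1)
# 50 epochs with a phase-locked 6 Hz (theta) component in noise...
locked = np.stack([np.sin(2 * np.pi * 6 * t) + rng.standard_normal(n)
                   for _ in range(50)])
# ...and 50 epochs whose 6 Hz phase varies randomly from trial to trial
jittered = np.stack([np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
                     + rng.standard_normal(n) for _ in range(50)])

print(itpc(locked, fs, 6) > itpc(jittered, fs, 6))  # True
```

Both quantities can be computed from any archived epoched dataset, which is exactly the reanalysis opportunity the abstract argues for.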

from #Audiology via ola Kala on Inoreader http://ift.tt/2bid0ey
via IFTTT

Neonate Auditory Brainstem Responses to CE-Chirp and CE-Chirp Octave Band Stimuli II: Versus Adult Auditory Brainstem Responses.

Objectives: The purpose of the study was to examine the differences in auditory brainstem response (ABR) latency and amplitude indices to the CE-Chirp stimuli in neonates versus young adults as a function of stimulus level, rate, polarity, frequency, and gender. Design: Participants were 168 healthy neonates and 20 normal-hearing young adults. ABRs were obtained to air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps. The effect of stimulus level was also examined with bone-conducted CE-Chirps and CE-Chirp octave band stimuli. The effect of gender was examined across all stimulus manipulations. Results: In general, ABR wave V amplitudes were significantly larger (p < 0.05). Conclusions: Significant differences in ABR latencies and amplitudes exist between newborns and young adults using CE-Chirp stimuli. These differences are consistent with differences to traditional click and tone burst stimuli and reflect maturational differences as a function of age. These findings continue to emphasize the importance of interpreting ABR results using age-based normative data. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhJsDd
via IFTTT

Screening, Education, and Rehabilitation Services for Hearing Loss Provided to Clients with Low Vision: Measured and Perceived Value Among Participants of the Vision-Hearing Project.

Objectives: Combined vision and hearing impairment, termed dual sensory impairment (DSI), is associated with poorer health outcomes than a single sensory loss alone. Separate systems of care exist for visual and hearing impairment, which potentially limits the effectiveness of managing DSI. To address this, a Hearing Screening Education Model (HSEM) was offered to older adults attending a low-vision clinic in Australia within this pilot study. The present study aimed to evaluate the benefits of seeking help on hearing handicap, self-perceived health, and use of community services among those identified with unmet hearing needs after participation in the HSEM. Design: Of 210 older adults (>55 years of age) who completed the HSEM and were referred for follow-up, 169 returned for a follow-up interview at least 12 months later. Of these, 68 (40.2%) sought help, and the majority were seen by a hearing healthcare provider (89.7%). Changes in hearing handicap, quality of life, and reliance on community services between baseline and the 12-month follow-up were compared between those who sought help and those who did not. In addition, the perceived value of the HSEM was assessed. Results: There was no significant difference in hearing handicap change between those who sought help (mean change -1.02, SD = 7.97, p = 0.3) and those who did not (mean change 0.94, SD = 7.68, p = 0.3); between-group p = 0.18. The mental component of the SF-36 worsened significantly between baseline and follow-up across the whole group (mean change -2.49, SD = 9.98, p = 0.002). This was largely driven by those not seeking help rather than those seeking help, but the difference between the two groups was not significant. Those who sought help showed a significant reduction in the use of community services compared with those who did not. Further, all participants positively viewed the HSEM's underlying principle of greater integration between vision and hearing services. 
Conclusions: These findings suggest a need to further develop and evaluate integrated models of healthcare for older adults with DSI. They also highlight the importance of using broader measures of benefit than hearing aid use alone to evaluate outcomes of hearing healthcare programs. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bid8KL
via IFTTT

Age-Related Differences in Listening Effort During Degraded Speech Recognition.

Objectives: The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design: Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Results: Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. 
Conclusions: OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhJUBF
via IFTTT

Overlap and Nonoverlap Between the ICF Core Sets for Hearing Loss and Otology and Audiology Intake Documentation.

Objectives: The International Classification of Functioning Disability and Health (ICF) core sets for hearing loss (HL) were developed to serve as a standard for the assessment and reporting of the functioning and health of patients with HL. The aim of the present study was to compare the content of the intake documentation currently used in secondary and tertiary hearing care settings in the Netherlands with the content of the ICF core sets for HL. Research questions were (1) to what extent are the ICF core sets for HL represented in the Dutch Otology and Audiology intake documentation? (2) are there any extra ICF categories expressed in the intake documentation that are currently not part of the ICF core sets for HL, or constructs expressed that are not part of the ICF? Design: Multicenter patient record study including 176 adult patients from two secondary, and two tertiary hearing care settings. The intake documentation was selected from anonymized patient records. The content was linked to the appropriate ICF category from the whole ICF classification using established linking rules. The extent to which the ICF core sets for HL were represented in the intake documentation was determined by assessing the overlap between the ICF categories in the core sets and the list of unique ICF categories extracted from the intake documentation. Any extra constructs that were expressed in the intake documentation but are not part of the core sets were described as well, differentiating between ICF categories that are not part of the core sets and constructs that are not part of the ICF classification. Results: In total, otology and audiology intake documentation represented 24 of the 27 brief ICF core set categories (i.e., 89%), and 60 of the 117 comprehensive ICF core set categories (i.e., 51%). 
Various ICF core set categories were not represented, including higher mental functions (body functions), civic life aspects (activities and participation), and support and attitudes of family (environmental factors). One extra ICF category emerged from the intake documentation that is currently not included in the core sets: sleep functions. Various personal factors also emerged from the intake documentation that are currently not defined in the ICF classification. Conclusions: The results showed substantial overlap between the ICF core sets for HL and the intake documentation of otology and audiology, but also revealed areas of nonoverlap. These findings contribute to the evaluation of the content validity of the core sets. The overlap can be viewed as supportive of the core sets' content validity. The nonoverlap in core set categories indicates that current Dutch intake procedures may not cover all aspects relevant to patients with ear/hearing problems. The identification of extra constructs suggests that the core sets may not include all areas of functioning that are relevant to Dutch Otology and Audiology patients. Incorporating both aspects into future intake practice deserves consideration. Operationalization of the ICF core set categories, including the extra constructs identified in this study, into a practical and integral intake instrument seems an important next step. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bie6XC
via IFTTT

Missing Data in the Field of Otorhinolaryngology and Head & Neck Surgery: Need for Improvement.

Objective: Clinical studies often face missing data. Data can be missing for various reasons: for example, patients moved, certain measurements were administered only in high-risk groups, or patients were unable to attend clinic because of their health status. There are various ways to handle missing data (e.g., complete-case analyses, mean substitution). Each of these techniques potentially influences both the analyses and the results of a study. The first aim of this structured review was to analyze how often researchers in the field of otorhinolaryngology/head & neck surgery report missing data. The second aim was to systematically describe how researchers handle missing data in their analyses. The third aim was to provide a solution for dealing with missing data by means of the multiple imputation technique. With this review, we aim to contribute to a higher quality of reporting in otorhinolaryngology research. Design: Clinical studies among the 398 most recently published research articles in three major journals in the field of otorhinolaryngology/head & neck surgery were analyzed based on how researchers reported and handled missing data. Results: Of the 316 clinical studies, 85 reported some form of missing data. Of those 85, only a small number (12 studies, 3.8%) actively handled the missingness in their data. The majority of researchers excluded incomplete cases, which results in biased outcomes and a drop in statistical power. Conclusions: Within otorhinolaryngology research, missing data are largely ignored and underreported and, consequently, handled inadequately. This has major impact on the results and conclusions drawn from this research. Based on the outcomes of this review, we provide solutions on how to deal with missing data. To illustrate, we clarify the use of multiple imputation techniques, which recently became widely available in standard statistical programs. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
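As a hedged illustration of the multiple imputation approach the review recommends: generate several completed datasets by regression imputation with a residual noise draw, then pool the estimate of interest across them. The clinical variables and missingness mechanism below are invented; real analyses would use a dedicated package (e.g., mice in R or scikit-learn's IterativeImputer) and Rubin's rules for pooled standard errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Invented, correlated clinical variables: age, pure-tone average (dB),
# and a speech score that depends on the pure-tone average.
age = rng.uniform(20, 80, n)
pta = 10 + 0.5 * age + rng.normal(0, 5, n)
score = 100 - 0.8 * pta + rng.normal(0, 5, n)

# Missing-at-random: half the scores of high-PTA patients are lost,
# so a complete-case mean is biased upward.
miss = (pta > np.median(pta)) & (rng.random(n) < 0.5)
obs = ~miss

def impute_once(rng):
    """One imputed dataset: regress score on (1, age, pta) over the
    observed cases, predict the missing scores, and add a residual
    noise draw so the imputations reflect uncertainty."""
    A = np.column_stack([np.ones(obs.sum()), age[obs], pta[obs]])
    beta, *_ = np.linalg.lstsq(A, score[obs], rcond=None)
    resid_sd = np.std(score[obs] - A @ beta)
    A_miss = np.column_stack([np.ones(miss.sum()), age[miss], pta[miss]])
    return A_miss @ beta + rng.normal(0, resid_sd, miss.sum())

# m imputed datasets; pool the estimate of interest (the mean score)
estimates = []
for _ in range(10):
    filled = score.copy()
    filled[miss] = impute_once(rng)
    estimates.append(filled.mean())
pooled_mean = float(np.mean(estimates))

truth = score.mean()               # known here because data are simulated
complete_case = score[obs].mean()  # the default most reviewed studies used
print(abs(pooled_mean - truth) < abs(complete_case - truth))  # True: less biased
```

The simulation makes the review's point concrete: dropping incomplete cases shifts the estimate, while imputation that borrows strength from observed covariates recovers something much closer to the truth.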

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhJsmH
via IFTTT

Neonate Auditory Brainstem Responses to CE-Chirp and CE-Chirp Octave Band Stimuli I: Versus Click and Tone Burst Stimuli.

Objectives: The purpose of the study was to generate normative auditory brainstem response (ABR) wave component peak latency and amplitude values for neonates with air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli (i.e., 500, 1000, 2000, and 4000 Hz). A second objective was to compare neonate ABRs to CE-Chirp stimuli with ABR responses to traditional click and tone burst stimuli with the same stimulus parameters. Design: Participants were 168 healthy neonates. ABRs were obtained to air- and bone-conducted CE-Chirp and click stimuli and air-conducted CE-Chirp octave band and tone burst stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps and clicks. The effect of stimulus level was also examined with bone-conducted CE-Chirps and clicks and air-conducted CE-Chirp octave band stimuli. Results: In general, ABR wave V amplitudes to air- and bone-conducted CE-Chirp stimuli were significantly larger (p < 0.05).

from #Audiology via ola Kala on Inoreader http://ift.tt/2bidL7b
via IFTTT

Prevalence of Hearing Loss Among a Representative Sample of Canadian Children and Adolescents, 3 to 19 Years of Age.

Objectives: There are no nationally representative hearing loss (HL) prevalence data available for Canadian youth using direct measurements. The present study objectives were to estimate the national prevalence of HL using audiometric pure-tone thresholds (0.5 to 8 kHz) and/or distortion product otoacoustic emissions (DPOAEs) for children and adolescents, aged 3 to 19 years. Design: This cross-sectional population-based study presents findings from the 2012/2013 Canadian Health Measures Survey, entailing an in-person household interview and hearing measurements conducted in a mobile examination clinic. The initial study sample included 2591 participants, aged 3 to 19 years, representing 6.5 million Canadians (3.3 million males). After exclusions, subsamples consisted of 2434 participants, aged 3 to 19 years, and 1879 participants, aged 6 to 19 years, with valid audiometric results. Eligible participants underwent otoscopic examination, tympanometry, DPOAE testing, and audiometry. HL was defined as a pure-tone average >20 dB for 6- to 18-year-olds and >=26 dB for 19-year-olds, for one or more of the following: four-frequency (0.5, 1, 2, and 4 kHz) pure-tone average, high-frequency (3, 4, 6, and 8 kHz) pure-tone average, and low-frequency (0.5, 1, and 2 kHz) pure-tone average. Mild HL was defined as >20 to 40 dB (6- to 18-year-olds) and >=26 to 40 dB (19-year-olds). Moderate or worse HL was defined as >40 dB (6- to 19-year-olds). HL in 3- to 5-year-olds (n = 555) was defined as absent DPOAEs, as audiometry was not conducted in this age group. Self-reported HL was evaluated using the Health Utilities Index Mark 3 hearing questions. Results: The primary study outcome indicates that 7.7% of Canadian youth, aged 6 to 19, had HL for one or more pure-tone averages. Four-frequency and high-frequency pure-tone average HL prevalence was 4.7% and 6.0%, respectively, whereas 5.8% had a low-frequency pure-tone average HL. Significantly more children/adolescents had unilateral HL. 
Mild HL was significantly more common than moderate or worse HL for each pure-tone average. Among Canadians aged 6 to 19, less than 2.2% had sensorineural HL. Among Canadians aged 3 to 19, less than 3.5% had conductive HL. Absent DPOAEs were found in 7.1E% of 3- to 5-year-olds and in 3.4E% of 6- to 19-year-olds. Among participants eligible for the hearing evaluation and excluding missing data cases (n = 2575), 17.0% had excessive or impacted pus/wax in one or both ears. Self-reported HL in Canadians aged 6 to 19 was 0.6E%, and 65.3% (aged 3 to 19) reported never having had their hearing tested. E indicates that high sampling variability is associated with the estimate (coefficient of variation between 16.6% and 33.3%), which should therefore be interpreted with caution. Conclusions: This study provides the first estimates of audiometrically measured HL prevalence among Canadian children and adolescents. A larger proportion of youth have measured HL than was previously reported using self-report surveys, indicating that screening using self-report or proxy report may not be effective in identifying individuals with mild HL. Results may underestimate the true prevalence of HL because of the large number excluded and the presence of impacted or excessive earwax or pus, which precluded an accurate or complete hearing evaluation. The majority of 3- to 5-year-olds with absent DPOAEs likely had conductive HL. Nonetheless, this type of HL, which can be asymptomatic, may become permanent if left untreated. Future research will benefit from analyses that include the slight HL category, for which there is growing support, and from studies that identify factors contributing to HL in this population. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially. 
Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.
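The abstract's HL definitions amount to a simple classification rule over pure-tone averages. Below is a minimal, illustrative sketch of that rule in Python; it is not the study's analysis code, and the function names are hypothetical. It assumes thresholds are given in dB HL and applies the abstract's cut-offs (>20 dB for 6- to 18-year-olds, >=26 dB for 19-year-olds, >40 dB for moderate or worse).

```python
# Illustrative sketch of the abstract's HL classification rule.
# Not the study's analysis code; names and structure are hypothetical.

def pure_tone_average(thresholds_db):
    """Mean of pure-tone thresholds in dB HL, e.g. the four-frequency
    PTA over 0.5, 1, 2, and 4 kHz."""
    return sum(thresholds_db) / len(thresholds_db)

def classify_hl(pta_db, age_years):
    """Classify one pure-tone average per the abstract's cut-offs:
    6-18 y: HL if PTA > 20 dB; 19 y: HL if PTA >= 26 dB.
    Mild HL extends up to 40 dB; >40 dB is moderate or worse."""
    if age_years >= 19:
        has_hl = pta_db >= 26
    else:
        has_hl = pta_db > 20
    if not has_hl:
        return "no HL"
    return "moderate or worse" if pta_db > 40 else "mild"
```

For example, a four-frequency PTA of 22.5 dB counts as mild HL for a 12-year-old but falls below the 26 dB cut-off used for 19-year-olds, which is why the two age bands are handled separately above.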

from #Audiology via ola Kala on Inoreader http://ift.tt/2bhJB9L
via IFTTT

Neural Correlates of Selective Attention With Hearing Aid Use Followed by ReadMyQuips Auditory Training Program.

Objectives: The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments.
Design: Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training over 4 weeks, while the control group listened to audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at pre-fitting, pre-training, and post-training to assess the effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in the P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded.
Results: After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant reduction in P3a. This reduction was correlated with improvement in d prime (d') in the selective attention task, as were increased P3b amplitudes. After training, the correlation between P3b and d' remained in the experimental group but not in the control group. Similarly, HINT testing showed improved speech perception post-training only in the experimental group. The criterion calculated in the auditory selective attention task was reduced only in the experimental group after training. ERP measures in the auditory selective attention task did not show any training-related changes.
Conclusions: Hearing aid use was associated with a decrement in involuntary attention switching to distractors in the auditory selective attention task. RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task. Copyright (C) 2016 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bidTUv
via IFTTT