Thursday, 14 July 2016

Assessing the Reliability and Use of the Expository Scoring Scheme as a Measure of Developmental Change in Monolingual English and Bilingual French/English Children

Purpose
The Expository Scoring Scheme (ESS) is designed to analyze the macrostructure of descriptions of a favorite game or sport. This pilot study examined inter- and intrarater reliability of the ESS and use of the scale to capture developmental change in elementary school children.
Method
Twenty-four children in 2 language groups (monolingual English and bilingual French/English) and 2 age groups (7–8 years, 11–12 years) participated (6 in each subgroup). Participants orally explained how to play their favorite game or sport in English. Expository discourse samples were rated for 10 macrostructure components using the ESS. Ratings were summed for a total score.
Results
Inter- and intrarater reliability was high for the total ESS score and for some but not all ESS components. In addition, the total score and ratings for many ESS components increased with age. Few differences were found in use of macrostructure components across language groups.
Conclusions
The ESS captures developmental change in the use of expository macrostructure in spoken discourse samples. In clinical practice, the lower reliability found for ratings of some ESS components should be taken into account. Because of the small sample size, these results should be considered preliminary and interpreted with caution.
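The scoring step described in the Method — summing 10 per-component ratings into a total — can be sketched in a few lines. The component names and the rating range below are illustrative assumptions, not taken from the ESS itself:

```python
# Hypothetical sketch of ESS-style total scoring: sum 10 component ratings.
# Component names and the mid-range sample ratings are illustrative only.

ESS_COMPONENTS = [
    "component_1", "component_2", "component_3", "component_4", "component_5",
    "component_6", "component_7", "component_8", "component_9", "component_10",
]

def total_ess_score(ratings: dict) -> int:
    """Sum the per-component ratings into a single total score."""
    missing = set(ESS_COMPONENTS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(ratings[c] for c in ESS_COMPONENTS)

sample = {c: 3 for c in ESS_COMPONENTS}  # a flat mid-range rating profile
print(total_ess_score(sample))  # 30
```

Keeping the components explicit (rather than summing whatever keys arrive) makes a missing rating fail loudly instead of silently lowering the total.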

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29GSFEz
via IFTTT


Acceptance of internet-based hearing healthcare among adults who fail a hearing screening.

Int J Audiol. 2016 Sep;55(9):483-490

Authors: Rothpletz AM, Moore AN, Preminger JE

Abstract
OBJECTIVE: This study measured help-seeking readiness and acceptance of existing internet-based hearing healthcare (IHHC) websites among a group of older adults who failed a hearing screening (Phase 1). It also explored the effects of brief training on participants' acceptance of IHHC (Phase 2).
STUDY SAMPLE: Twenty-seven adults (age 55+) who failed a hearing screening participated.
DESIGN: During Phase 1, participants were administered the University of Rhode Island Change Assessment (URICA) and the Patient Technology Acceptance Model (PTAM) questionnaire. During Phase 2, participants were randomly assigned to a training or control group. Training-group participants attended an instructional class on existing IHHC websites; the control group received no training. The PTAM questionnaire was re-administered to both groups 4-6 weeks after the initial assessment.
RESULTS: The majority of participants were either considering or preparing to do something about their hearing loss, and were generally accepting of IHHC websites (Phase 1). The participants who underwent brief IHHC training reported increases in hearing healthcare knowledge and slight improvements in computer self-efficacy (Phase 2).
CONCLUSIONS: Older adults who fail hearing screenings may be good candidates for IHHC. Incorporating a simple user interface and short-term training may optimize the usability of future IHHC programs for this population.

PMID: 27409278 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/29FGHF0
via IFTTT


JAAA CEU Program.

J Am Acad Audiol. 2016 Jul;27(7):612-3

Authors:

PMID: 27406666 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29yKBo1
via IFTTT

A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.

J Am Acad Audiol. 2016 Jul;27(7):601-11

Authors: Weller T, Best V, Buchholz JM, Young T

Abstract
BACKGROUND: Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest.
PURPOSE: The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario.
RESEARCH DESIGN: This study was a descriptive case-control study.
STUDY SAMPLE: Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL.
DATA COLLECTION AND ANALYSIS: A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction.
RESULTS: Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss.
CONCLUSIONS: This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests.
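The "four-frequency average hearing loss" reported for the HI group above is a simple pure-tone average. A sketch, assuming the conventional 500, 1000, 2000, and 4000 Hz audiometric frequencies (the abstract does not state which four frequencies were averaged):

```python
# Sketch: four-frequency pure-tone average (PTA4) from an audiogram.
# The 500/1000/2000/4000 Hz choice is a common convention and an assumption
# here; the study may have averaged a different set of frequencies.

PTA4_FREQS = (500, 1000, 2000, 4000)  # Hz

def pta4(thresholds_db_hl: dict) -> float:
    """Mean air-conduction threshold (dB HL) across the four frequencies."""
    return sum(thresholds_db_hl[f] for f in PTA4_FREQS) / len(PTA4_FREQS)

audiogram = {500: 20, 1000: 25, 2000: 35, 4000: 40}  # dB HL, one ear
print(pta4(audiogram))  # 30.0
```

A value near 30 dB HL, as in this sample, sits at the boundary of a mild-to-moderate loss, consistent with the group described in the study sample.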

PMID: 27406665 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29A88Bj
via IFTTT

The Effects of Hearing Impairment, Age, and Hearing Aids on the Use of Self-Motion for Determining Front/Back Location.

J Am Acad Audiol. 2016 Jul;27(7):588-600

Authors: Brimijoin WO, Akeroyd MA

Abstract
BACKGROUND: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids.
PURPOSE: To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues.
RESEARCH DESIGN: We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids.
STUDY SAMPLE: We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment.
DATA COLLECTION AND ANALYSIS: Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided.
RESULTS: Hearing-impaired listeners were less accurate at front/back discrimination for both static and illusory conditions. Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but independent of other factors, on average, listeners wearing aids exhibited a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front.
CONCLUSIONS: Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion-related cues with sufficient fidelity to allow reliable front/back discrimination.
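The stimulus manipulation behind the front/back illusion — rotating the physical source at twice the angular rate of the head — can be sketched as a head-tracked update rule. This is a simplified reconstruction under stated assumptions, not the authors' implementation:

```python
# Sketch of the front/back-illusion source update: for each tracked head
# rotation delta, the world azimuth of the source advances by twice that
# delta, so the source moves relative to the head as a mirrored source would.
# Angles in degrees; this update logic is an assumption for illustration.

def update_source_azimuth(source_az: float, head_delta: float) -> float:
    """Rotate the source at 2x the head's angular rate (wrapped to 0-360)."""
    return (source_az + 2.0 * head_delta) % 360.0

az = 0.0  # source starts straight ahead of the listener
for head_delta in (10.0, 10.0, -5.0):  # listener turns right, right, left
    az = update_source_azimuth(az, head_delta)
print(az)  # 30.0
```

Because the head-relative azimuth then changes in the same direction as the head turn (rather than the opposite), the dynamic binaural cues match a source in the opposite hemifield, which is the basis of the illusion.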

PMID: 27406664 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29yKI2N
via IFTTT

Effects of Hearing Loss on Dual-Task Performance in an Audiovisual Virtual Reality Simulation of Listening While Walking.

J Am Acad Audiol. 2016 Jul;27(7):567-87

Authors: Lau ST, Pichora-Fuller MK, Li KZ, Singh G, Campos JL

Abstract
BACKGROUND: Most activities of daily living require the dynamic integration of sights, sounds, and movements as people navigate complex environments. Nevertheless, little is known about the effects of hearing loss (HL) or hearing aid (HA) use on listening during multitasking challenges.
PURPOSE: The objective of the current study was to investigate the effect of age-related hearing loss (ARHL) on word recognition accuracy in a dual-task experiment. Virtual reality (VR) technologies in a specialized laboratory (Challenging Environment Assessment Laboratory) were used to produce a controlled and safe simulated environment for listening while walking.
RESEARCH DESIGN: In a simulation of a downtown street intersection, participants completed two single-task conditions, listening-only (standing stationary) and walking-only (walking on a treadmill to cross the simulated intersection with no speech presented), and a dual-task condition (listening while walking). For the listening task, they were required to recognize words spoken by a target talker when there was a competing talker. For some blocks of trials, the target talker was always located at 0° azimuth (100% probability condition); for other blocks, the target talker was more likely (60% of trials) to be located at the center (0° azimuth) and less likely (40% of trials) to be located at the left (270° azimuth).
STUDY SAMPLE: The participants were eight older adults with bilateral HL (mean age = 73.3 yr, standard deviation [SD] = 8.4; three males) who wore their own HAs during testing and eight controls with normal hearing (NH) thresholds (mean age = 69.9 yr, SD = 5.4; two males). No participant had clinically significant visual, cognitive, or mobility impairments.
DATA COLLECTION AND ANALYSIS: Word recognition accuracy and kinematic parameters (head and trunk angles, step width and length, stride time, cadence) were analyzed using mixed factorial analyses of variance with group as a between-subjects factor. Task condition (single versus dual) and probability (100% versus 60%) were within-subject factors. In analyses of the 60% listening condition, spatial expectation (likely versus unlikely) was a within-subject factor. Differences between groups in age and baseline measures of hearing, mobility, and cognition were tested using t tests.
RESULTS: The NH group had significantly better word recognition accuracy than the HL group. Both groups performed better when the probability was higher and the target location more likely. For word recognition, dual-task costs for the HL group did not depend on condition, whereas the NH group demonstrated a surprising dual-task benefit in conditions with lower probability or spatial expectation. For the kinematic parameters, both groups demonstrated a more upright and less variable head position and more variable trunk position during dual-task conditions compared to the walking-only condition, suggesting that safe walking was prioritized. The HL group demonstrated more overall stride time variability than the NH group.
CONCLUSIONS: This study provides new knowledge about the effects of ARHL, HA use, and aging on word recognition when individuals also perform a mobility-related task that is typically experienced in everyday life. This research may help inform the development of more effective function-based approaches to assessment and intervention for people who are hard-of-hearing.
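The dual-task costs (and the surprising dual-task benefit) in the results are conventionally expressed as the proportional change from single-task to dual-task performance. A sketch under that assumption — the abstract does not quote the exact cost formula used:

```python
# Sketch: proportional dual-task cost for an accuracy measure.
# Positive values indicate a cost (worse under dual-task); negative values
# indicate a dual-task benefit. This formula is a common convention assumed
# here for illustration, not quoted from the paper.

def dual_task_cost(single_task: float, dual_task: float) -> float:
    """(single - dual) / single, expressed as a percentage."""
    if single_task <= 0:
        raise ValueError("single-task score must be positive")
    return 100.0 * (single_task - dual_task) / single_task

print(round(dual_task_cost(80.0, 72.0), 1))  # 10.0  -> a 10% cost
print(round(dual_task_cost(80.0, 84.0), 1))  # -5.0  -> a dual-task benefit
```

Normalizing by single-task performance lets groups with different baselines (here, NH versus HL) be compared on the same cost scale.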

PMID: 27406663 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TtjjM
via IFTTT


Spatial Acoustic Scenarios in Multichannel Loudspeaker Systems for Hearing Aid Evaluation.

J Am Acad Audiol. 2016 Jul;27(7):557-66

Authors: Grimm G, Kollmeier B, Hohmann V

Abstract
BACKGROUND: Field tests and guided walks in real environments show that the benefit from hearing aid (HA) signal processing in real-life situations is typically lower than the predicted benefit found in laboratory studies. This suggests that laboratory test outcome measures are poor predictors of real-life HA benefits. However, a systematic evaluation of algorithms in the field is difficult due to the lack of reproducibility and control of the test conditions. Virtual acoustic environments that simulate real-life situations may allow for a systematic and reproducible evaluation of HAs under more realistic conditions, thus providing a better estimate of real-life benefit than established laboratory tests.
PURPOSE: To quantify the difference in HA performance between a laboratory condition and more realistic conditions based on technical performance measures using virtual acoustic environments, and to identify the factors affecting HA performance across the tested environments.
RESEARCH DESIGN: A set of typical HA beamformer algorithms was evaluated in virtual acoustic environments of different complexity. Performance was assessed based on established technical performance measures, including perceptual model predictions of speech quality and speech intelligibility. Virtual acoustic environments ranged from a simple static reference condition to more realistic complex scenes with dynamically moving sound objects.
RESULTS: HA benefit, as predicted by signal-to-noise ratio (SNR) and speech intelligibility measures, differs between the reference condition and more realistic conditions for the tested beamformer algorithms. Other performance measures, such as speech quality or binaural degree of diffusiveness, do not show pronounced differences. However, a decreased speech quality was found in specific conditions. A correlation analysis showed a significant correlation between room acoustic parameters of the sound field and HA performance. The SNR improvement in the reference condition was found to be a poor predictor of HA performance in terms of speech intelligibility improvement in the more realistic conditions.
CONCLUSIONS: Using several virtual acoustic environments of different complexity, a systematic difference in HA performance between a simple reference condition and more realistic environments was found, which may be related to the discrepancy between laboratory and real-life HA performance reported previously.
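The SNR improvement used as a performance measure above can be expressed as the difference between output and input SNR in dB. A minimal sketch from signal and noise powers — function names and the example values are illustrative, not taken from the paper:

```python
import math

# Sketch: SNR improvement of a beamformer, in dB, computed from signal and
# noise powers at the algorithm's input and output. Illustrative only.

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in dB from linear signal and noise powers."""
    return 10.0 * math.log10(signal_power / noise_power)

def snr_improvement_db(sig_in: float, noise_in: float,
                       sig_out: float, noise_out: float) -> float:
    """Output SNR minus input SNR, in dB."""
    return snr_db(sig_out, noise_out) - snr_db(sig_in, noise_in)

# E.g. a beamformer that passes the target unchanged but halves noise power:
print(round(snr_improvement_db(1.0, 1.0, 1.0, 0.5), 2))  # 3.01
```

The paper's point is that an improvement measured this way in a simple static scene can overstate the intelligibility benefit delivered in complex, dynamic scenes.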

PMID: 27406662 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TVfGO
via IFTTT

Evaluation of Loudspeaker-Based Virtual Sound Environments for Testing Directional Hearing Aids.

J Am Acad Audiol. 2016 Jul;27(7):541-56

Authors: Oreinos C, Buchholz JM

Abstract
BACKGROUND: Assessments of hearing aid (HA) benefits in the laboratory often do not accurately reflect real-life experience. This may be improved by employing loudspeaker-based virtual sound environments (VSEs) that provide more realistic acoustic scenarios. It is unclear how far the limited accuracy of these VSEs influences measures of subjective performance.
PURPOSE: To verify two common methods for creating VSEs that are to be used for assessing HA outcomes.
RESEARCH DESIGN: A cocktail-party scene was created inside a meeting room and then reproduced with a 41-channel loudspeaker array inside an anechoic chamber. The reproduced scenes were created using either room acoustic modeling techniques or microphone array recordings.
STUDY SAMPLE: Participants were 18 listeners with a symmetrical, sloping, mild-to-moderate hearing loss, aged between 66 and 78 yr (mean = 73.8 yr).
DATA COLLECTION AND ANALYSIS: The accuracy of the two VSEs was assessed by comparing the subjective performance measured with two-directional HA algorithms inside all three acoustic environments. The performance was evaluated by using a speech intelligibility test and an acceptable noise level task.
RESULTS: The general behavior of the subjective performance seen in the real environment was preserved in the two VSEs for both directional HA algorithms. However, the estimated directional benefits were slightly reduced in the model-based VSE, and further reduced in the recording-based VSE.
CONCLUSIONS: It can be concluded that the considered VSEs can be used for testing directional HAs, but the provided sensitivity is reduced when compared to a real environment. This can result in an underestimation of the provided directional benefit. However, this minor limitation may be easily outweighed by the high realism of the acoustic scenes that these VSEs can generate, which may result in HA outcome measures with a significantly higher ecological relevance than provided by measures commonly performed in the laboratory or clinic.

PMID: 27406661 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TtQ5h
via IFTTT

Common Sound Scenarios: A Context-Driven Categorization of Everyday Sound Environments for Application in Hearing-Device Research.


J Am Acad Audiol. 2016 Jul;27(7):527-40

Authors: Wolters F, Smeds K, Schmidt E, Christensen EK, Norup C

Abstract
BACKGROUND: Evaluation of hearing-device signal-processing features is performed for research and development purposes, but also in clinical settings. Most people agree that the benefit experienced in a hearing-device user's daily life is most important, but laboratory tests are popular since they can be performed uniformly for all participants in a study using sensitive outcome measures. In order to design laboratory tests that have the potential of indicating real-life benefit, there is a need for more information about the acoustic environments and listening situations encountered by hearing-device users as well as by normal-hearing people.
PURPOSE: To investigate the acoustic environments and listening situations people encounter, and to provide a structured framework of common sound scenarios (CoSS) that can be used for instance when designing realistic laboratory tests.
RESEARCH DESIGN: A literature search was conducted. Extracted acoustic environments and listening situations were categorized using a context-based approach. A set of common sound scenarios was established based on the findings from the literature.
DATA COLLECTION: A number of publications providing data on encountered acoustic environments and listening situations were identified. Focus was on studies including informants who reported or recorded information in field trials. Nine relevant references were found. In combination with data collected at our laboratory, 187 examples of acoustic environments or listening situations were found.
RESULTS: Based on the extracted data, a categorization approach based on context (intentions and tasks) was used when creating CoSS. Three intention categories, "speech communication," "focused listening," and "nonspecific," were divided into seven task categories. In each task category, two sound scenarios were described, creating 14 common sound scenarios in total. The literature search showed a general lack of studies investigating acoustic environments and listening situations, in particular studies where normal-hearing informants are included and studies performed outside North America and Western Europe.
CONCLUSIONS: A structured framework was developed. Intentions and tasks constitute the main categories in the framework, and 14 common sound scenarios were selected and described. The framework can for instance be used when developing hearing-device signal-processing features, in the evaluation of such features in realistic laboratory tests, and for demonstration of feature effects to hearing-device wearers.

PMID: 27406660 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TV4vi
via IFTTT

A Dynamic Speech Comprehension Test for Assessing Real-World Listening Ability.


J Am Acad Audiol. 2016 Jul;27(7):515-26

Authors: Best V, Keidser G, Freeston K, Buchholz JM

Abstract
BACKGROUND: Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools.
PURPOSE: The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situations and ultimately provide better predictions of real-world communication abilities and the effect of hearing aids.
RESEARCH DESIGN: A previously introduced test of ongoing speech comprehension was extended to include naturalistic conversations between multiple talkers as targets, and a reverberant background environment containing competing conversations. In this article, we describe the development of this test and present a validation study.
STUDY SAMPLE: Thirty listeners with normal hearing participated in this study.
DATA COLLECTION AND ANALYSIS: Speech comprehension was measured for one-, two-, and three-talker passages at three different signal-to-noise ratios (SNRs), and working memory ability was measured using the reading span test. Analyses were conducted to examine passage equivalence, learning effects, and test-retest reliability, and to characterize the effects of number of talkers and SNR.
RESULTS: Although we observed differences in difficulty across passages, it was possible to group the passages into four equivalent sets. Using this grouping, we achieved good test-retest reliability and observed no significant learning effects. Comprehension performance was sensitive to the SNR but did not decrease as the number of talkers increased. Individual performance showed associations with age and reading span score.
CONCLUSIONS: This new dynamic speech comprehension test appears to be valid and suitable for experimental purposes. Further work will explore its utility as a tool for predicting real-world communication ability and hearing aid benefit.

PMID: 27406659 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29Ttj3g
via IFTTT

Theoretical Issues of Validity in the Measurement of Aided Speech Reception Threshold in Noise for Comparing Nonlinear Hearing Aid Systems.


J Am Acad Audiol. 2016 Jul;27(7):504-14

Authors: Naylor G

Abstract
BACKGROUND: Adaptive Speech Reception Threshold in noise (SRTn) measurements are often used to make comparisons between alternative hearing aid (HA) systems. Such measurements usually do not constrain the signal-to-noise ratio (SNR) at which testing takes place. Meanwhile, HA systems increasingly include nonlinear features that operate differently in different SNRs, and listeners differ in their inherent SNR requirements.
PURPOSE: To show that SRTn measurements, as commonly used in comparisons of alternative HA systems, suffer from threats to their validity, to illustrate these threats with examples of potentially invalid conclusions in the research literature, and to propose ways to tackle these threats.
RESEARCH DESIGN: An examination of the nature of SRTn measurements in the context of test theory, modern nonlinear HAs, and listener diversity.
STUDY SAMPLE, DATA COLLECTION, AND ANALYSIS: Examples from the audiological research literature were used to estimate typical interparticipant variation in SRTn and to illustrate cases where validity may have been compromised.
RESULTS AND CONCLUSIONS: There can be no doubt that SRTn measurements, when used to compare nonlinear HA systems, in principle, suffer from threats to their internal and external/ecological validity. Interactions between HA nonlinearities and SNR, and interparticipant differences in inherent SNR requirements, can act to generate misleading results. In addition, SRTn may lie at an SNR outside the range in which the HA system is designed or expected to operate. Although the extent of invalid conclusions in the literature is difficult to evaluate, examples of studies were nevertheless identified where the risk of each form of invalidity is significant. Reliable data on ecological SNRs are becoming available, so that ecological validity can be assessed. Methodological developments that can reduce the risk of invalid conclusions include variations on the SRTn measurement procedure itself, manipulations of stimulus or scoring conditions to place SRTn in an ecologically relevant range, and design and analysis approaches that take account of interparticipant differences.
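The unconstrained adaptive SRTn procedure the article critiques can be sketched as a simple 1-down/1-up staircase. The listener model and numeric values below are illustrative assumptions, not taken from the article; the point is that the track converges to wherever the listener's 50%-correct point happens to lie, which may be an SNR outside the HA system's intended operating range.

```python
def adaptive_srt(respond, start_snr=0.0, step_db=2.0, n_trials=20):
    """1-down/1-up adaptive track: lower the SNR after a correct
    response, raise it after an error; the track oscillates around
    the listener's 50%-correct point (the SRTn)."""
    snr = start_snr
    track = []
    for _ in range(n_trials):
        correct = respond(snr)          # present a sentence at this SNR
        track.append(snr)
        snr += -step_db if correct else step_db
    return sum(track[-8:]) / 8          # average the late, stable part of the track

# toy deterministic listener: correct whenever SNR >= -4 dB
srt_estimate = adaptive_srt(lambda snr: snr >= -4.0)
```

With this toy listener the track settles into an oscillation between -4 and -6 dB, so the estimate lands near -5 dB; a listener with a different inherent SNR requirement would pull the test to a different SNR, with the HA's nonlinear features then operating in a different regime.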

PMID: 27406658 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TVnGB
via IFTTT

Introduction to Special Issue: Towards Ecologically Valid Protocols for the Assessment of Hearing and Hearing Devices.


J Am Acad Audiol. 2016 Jul;27(7):502-3

Authors: Keidser G

PMID: 27406657 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TtDif
via IFTTT

Spatial Acoustic Scenarios in Multichannel Loudspeaker Systems for Hearing Aid Evaluation.


J Am Acad Audiol. 2016 Jul;27(7):557-66

Authors: Grimm G, Kollmeier B, Hohmann V

Abstract
BACKGROUND: Field tests and guided walks in real environments show that the benefit from hearing aid (HA) signal processing in real-life situations is typically lower than the predicted benefit found in laboratory studies. This suggests that laboratory test outcome measures are poor predictors of real-life HA benefits. However, a systematic evaluation of algorithms in the field is difficult due to the lack of reproducibility and control of the test conditions. Virtual acoustic environments that simulate real-life situations may allow for a systematic and reproducible evaluation of HAs under more realistic conditions, thus providing a better estimate of real-life benefit than established laboratory tests.
PURPOSE: To quantify the difference in HA performance between a laboratory condition and more realistic conditions based on technical performance measures using virtual acoustic environments, and to identify the factors affecting HA performance across the tested environments.
RESEARCH DESIGN: A set of typical HA beamformer algorithms was evaluated in virtual acoustic environments of different complexity. Performance was assessed based on established technical performance measures, including perceptual model predictions of speech quality and speech intelligibility. Virtual acoustic environments ranged from a simple static reference condition to more realistic complex scenes with dynamically moving sound objects.
RESULTS: HA benefit, as predicted by signal-to-noise ratio (SNR) and speech intelligibility measures, differs between the reference condition and more realistic conditions for the tested beamformer algorithms. Other performance measures, such as speech quality or binaural degree of diffusiveness, do not show pronounced differences. However, a decreased speech quality was found in specific conditions. A correlation analysis showed a significant correlation between room acoustic parameters of the sound field and HA performance. The SNR improvement in the reference condition was found to be a poor predictor of HA performance in terms of speech intelligibility improvement in the more realistic conditions.
CONCLUSIONS: Using several virtual acoustic environments of different complexity, a systematic difference in HA performance between a simple reference condition and more realistic environments was found, which may be related to the discrepancy between laboratory and real-life HA performance reported previously.
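The correlation analysis mentioned in the results is, in essence, a correlation between room acoustic parameters of the sound field and HA performance measures. A minimal sketch of such an analysis, using a plain Pearson correlation and made-up numbers (the actual parameters and measures are not given here), might look like:

```python
import math

def pearson_r(x, y):
    """Plain Pearson product-moment correlation between two
    equal-length sequences of observations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# hypothetical values: e.g. reverberation times (s) vs. some HA benefit measure (dB)
r = pearson_r([0.3, 0.5, 0.8, 1.2], [2.1, 2.6, 3.5, 4.4])
```

A strong correlation (r close to 1) between a room acoustic parameter and HA performance is the kind of relationship the study reports.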

PMID: 27406662 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/29TVfGO
via IFTTT

Acceptance of internet-based hearing healthcare among adults who fail a hearing screening.


Int J Audiol. 2016 Sep;55(9):483-490

Authors: Rothpletz AM, Moore AN, Preminger JE

Abstract
OBJECTIVE: This study measured help-seeking readiness and acceptance of existing internet-based hearing healthcare (IHHC) websites among a group of older adults who failed a hearing screening (Phase 1). It also explored the effects of brief training on participants' acceptance of IHHC (Phase 2).
STUDY SAMPLE: Twenty-seven adults (age 55+) who failed a hearing screening participated.
DESIGN: During Phase 1 participants were administered the University of Rhode Island Change Assessment (URICA) and patient technology acceptance model (PTAM) Questionnaire. During Phase 2 participants were randomly assigned to a training or control group. Training group participants attended an instructional class on existing IHHC websites. The control group received no training. The PTAM questionnaire was re-administered to both groups 4-6 weeks following the initial assessment.
RESULTS: The majority of participants were either considering or preparing to do something about their hearing loss, and were generally accepting of IHHC websites (Phase 1). The participants who underwent brief IHHC training reported increases in hearing healthcare knowledge and slight improvements in computer self-efficacy (Phase 2).
CONCLUSIONS: Older adults who fail hearing screenings may be good candidates for IHHC. The incorporation of a simple user interface and short-term training may optimize the usability of future IHHC programs for this population.

PMID: 27409278 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/29FGHF0
via IFTTT

What Is the Prevalence of Tinnitus Worldwide?

What is the prevalence of tinnitus worldwide? As audiologists, we may well want to know the answer to this question. Unfortunately, a recent systematic review conducted by McCormack et al. (2016) suggests that it may not be an easy question to answer.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/29ymIwK
via IFTTT

Binaural Glimpses at the Cocktail Party?

ABSTRACT

Humans often have to focus on a single target sound while ignoring competing maskers in everyday situations. In such conditions, speech intelligibility (SI) is improved when a target speaker is spatially separated from a masker (spatial release from masking, SRM) compared to situations where both are co-located. Such asymmetric spatial configurations lead to a ‘better-ear effect’ with improved signal-to-noise ratio (SNR) at one ear. However, maskers often surround the listener, leading to more symmetric configurations where better-ear effects are absent in a long-term, wideband sense. Nevertheless, better-ear glimpses distributed across time and frequency persist and were suggested to account for SRM (Brungart and Iyer 2012). Here, speech reception was assessed using symmetric masker configurations while varying the spatio-temporal distribution of potential better-ear glimpses. Listeners were presented with a frontal target and eight single-talker maskers in four different symmetrical spatial configurations. Compared to the reference condition with co-located target and maskers, an SRM of up to 6 dB was observed. The SRM persisted when the frequency range of the maskers above or below 1500 Hz was replaced with stationary speech-shaped noise. Comparison to a recent short-time binaural SI model showed that better-ear glimpses can account for half the observed SRM, while binaural interaction utilizing phase differences is required to explain the other half.
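The better-ear-glimpse idea can be illustrated with a toy calculation: for each time-frequency tile, take the larger of the two ears' local SNRs and count the tiles that reach a threshold. The function and values below are illustrative assumptions, not the short-time binaural model used in the study.

```python
import math

def better_ear_glimpses(t_left, t_right, m_left, m_right, thresh_db=0.0):
    """Count time-frequency tiles where the better ear's local SNR
    (target power over masker power, in dB) reaches the threshold.
    Inputs are parallel lists of per-tile power values (linear units)."""
    glimpses = 0
    for tl, tr, ml, mr in zip(t_left, t_right, m_left, m_right):
        snr_l = 10 * math.log10(tl / ml)
        snr_r = 10 * math.log10(tr / mr)
        if max(snr_l, snr_r) >= thresh_db:
            glimpses += 1
    return glimpses

# two tiles in which the ear advantage alternates sides:
# each tile still offers a glimpse at one ear, so both count
n = better_ear_glimpses([1.0, 1.0], [1.0, 4.0], [2.0, 1.0], [1.0, 2.0])
```

In a symmetric masker configuration the long-term wideband SNR is the same at both ears, yet glimpses like these can still be scattered across time and frequency, which is the mechanism the study quantifies.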



from #Audiology via xlomafota13 on Inoreader http://ift.tt/29FfEeX
via IFTTT

Acceptance of internet-based hearing healthcare among adults who fail a hearing screening

Volume 55, Issue 9, September 2016, pages 483-490. doi:10.1080/14992027.2016.1185804. Ann M. Rothpletz

from #Audiology via xlomafota13 on Inoreader http://ift.tt/29yTmuE
via IFTTT

Descending Projections from the Inferior Colliculus to Medial Olivocochlear Efferents: Mice with Normal Hearing, Early Onset Hearing Loss, and Congenital Deafness


Publication date: Available online 12 July 2016
Source:Hearing Research
Author(s): Kirupa Suthakar, David K. Ryugo
Auditory efferent neurons reside in the brain and innervate the sensory hair cells of the cochlea to modulate incoming acoustic signals. Two groups of efferents have been described in mouse, and this report will focus on the medial olivocochlear (MOC) system. Electrophysiological data suggest the MOC efferents function in selective listening by differentially attenuating auditory nerve fiber activity in quiet and noisy conditions. Because speech understanding in noise is impaired in age-related hearing loss, we asked whether pathologic changes in input to MOC neurons from higher centers could be involved. The present study investigated the anatomical nature of descending projections from the inferior colliculus (IC) to MOCs in 3-month-old mice with normal hearing, and 6-month-old mice with normal hearing, early onset hearing loss, and congenital deafness. Anterograde tracers were injected into the IC and retrograde tracers into the cochlea. Electron microscopic analysis of double-labelled tissue confirmed direct synaptic contact from the IC onto MOCs in all cohorts. These labelled terminals are indicative of excitatory neurotransmission because they contain round synaptic vesicles, exhibit asymmetric membrane specializations, and are co-labelled with antibodies against VGlut2, a glutamate transporter. 3D reconstructions of the terminal fields indicate that in normal hearing mice, descending projections from the IC are arranged tonotopically, with low frequencies projecting laterally and progressively higher frequencies projecting more medially. Along the mediolateral axis, the projections of DBA/2 mice with acquired high frequency hearing loss were shifted medially towards expected higher frequency projecting regions. Shaker-2 mice with congenital deafness had a much broader spatial projection, revealing abnormalities in the topography of connections.
These data suggest that a loss in the precision of IC-directed MOC activation could contribute to impaired signal detection in noise.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/29F1gr5
via IFTTT

Descending Projections from the Inferior Colliculus to Medial Olivocochlear Efferents: Mice with Normal Hearing, Early Onset Hearing Loss, and Congenital Deafness

Publication date: Available online 12 July 2016
Source: Hearing Research
Author(s): Kirupa Suthakar, David K. Ryugo
