Sunday, 10 December 2017

Complex versus Standard Fittings: Part 1

An examination of the assumptions that underpin our field’s typical approach to fitting amplification and its limitations, along with an introduction to the concept of Residual Capabilities, which maximizes the patient’s ability to use the hearing that remains.

from #Audiology via ola Kala on Inoreader http://ift.tt/2jpIIQJ
via IFTTT

Minimally invasive laser vibrometry (MIVIB) with a floating mass transducer – A new method for objective evaluation of the middle ear demonstrated on stapes fixation

Publication date: January 2018
Source: Hearing Research, Volume 357
Author(s): Jeremy Wales, Kilian Gladiné, Paul Van de Heyning, Vedat Topsakal, Magnus von Unge, Joris Dirckx
Ossicular fixation caused by otosclerosis, chronic otitis media, and other pathologies, especially tympanosclerosis, is treated surgically when hearing aids fail as an alternative. However, the best hearing outcome often depends on knowledge of the degree and location of the fixation, and objective methods to quantify the degree and position of the fixation are largely lacking. Laser vibrometry is a known method for detecting ossicular fixation, but its clinical applicability remains limited. A new method, minimally invasive laser vibrometry (MIVIB), is presented to quantify ossicle mobility using laser vibrometry measurements through the ear canal after elevation of the tympanic membrane, making the method feasible in minimally invasive explorative surgery. A floating mass transducer provides a clinically relevant means of driving ossicular vibration. The device was attached to the manubrium, drove vibrations along the longitudinal axis of the stapes, and was used to assess ossicular chain mobility in a fresh-frozen temporal bone model with and without stapes fixation. The velocity ratio between the umbo and the long process of the incus proved useful in assessing stapes fixation: the incus-to-umbo velocity ratio decreased by 15 dB, up to 2.5 kHz, when the unfixated condition was compared with stapes fixation. Such quantification of ossicular fixation using the incus-to-umbo velocity ratio would allow quick and objective analysis of ossicular chain fixations, which would assist the surgeon in surgical planning and help optimize hearing outcomes.
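
The key metric here is the incus-to-umbo velocity ratio expressed in dB. As a minimal illustration (not the authors' analysis code), the ratio of two velocity magnitudes measured by laser vibrometry can be converted to dB as follows; the example values are hypothetical:

import numpy as np

def velocity_ratio_db(v_incus, v_umbo):
    # Incus-to-umbo velocity ratio in dB: 20*log10 of the magnitude ratio.
    return 20.0 * np.log10(np.abs(v_incus) / np.abs(v_umbo))

# Hypothetical velocity magnitudes (m/s) at a single stimulation frequency:
print(velocity_ratio_db(2.0e-4, 1.0e-3))   # about -14 dB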



from #Audiology via ola Kala on Inoreader http://ift.tt/2iJWGc6
via IFTTT

The effect of simulated unilateral hearing loss on horizontal sound localization accuracy and recognition of speech in spatially separate competing speech

Publication date: January 2018
Source: Hearing Research, Volume 357
Author(s): Filip Asp, Anne-Marie Jakobsson, Erik Berninger
Unilateral hearing loss (UHL) occurs in 25% of cases of congenital sensorineural hearing loss. Due to the unilaterally reduced audibility associated with UHL, everyday demanding listening situations may be disrupted despite normal hearing in one ear. The aim of this study was to quantify acute changes in recognition of speech in spatially separate competing speech and in sound localization accuracy, and to relate those changes to two levels of temporarily induced UHL (UHL30 and UHL43; suffixes denote the average hearing threshold across 0.5, 1, 2, and 4 kHz) in 8 normal-hearing adults. A within-subject repeated-measures design was used (normal binaural conditions, UHL30 and UHL43). The main outcome measures were the threshold for 40% correct speech recognition and the overall variance in sound localization accuracy, quantified by an Error Index (0 = perfect performance, 1.0 = random performance). Distinct and statistically significant deterioration in speech recognition (2.0 dB increase in threshold, p < 0.01) and sound localization (Error Index increase of 0.16, p < 0.001) occurred in the UHL30 condition. Speech recognition did not deteriorate significantly further in the UHL43 condition (1.0 dB increase in speech recognition threshold, p > 0.05), while sound localization was additionally impaired (Error Index increase of 0.33, p < 0.01), with an associated large increase in individual variability. Qualitative analyses on a subject-by-subject basis showed that high-frequency audibility was important for speech recognition, while low-frequency audibility was important for horizontal sound localization accuracy. While the data might not be entirely applicable to individuals with long-standing UHL, the results suggest a need for intervention in mild-to-moderate UHL.
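
The abstract defines the Error Index only by its endpoints (0 = perfect, 1.0 = random), not by a formula. Below is a hedged Python sketch of one plausible construction, normalizing the observed squared localization error by the squared error expected under random responding over the same loudspeaker positions; the actual index used by the authors may be computed differently:

import numpy as np

def error_index(presented_deg, perceived_deg, loudspeaker_angles_deg):
    # Illustrative assumption: mean squared localization error divided by the
    # mean squared error expected if responses were drawn at random from the
    # loudspeaker positions (0 = perfect, about 1 = random).
    presented = np.asarray(presented_deg, dtype=float)
    perceived = np.asarray(perceived_deg, dtype=float)
    speakers = np.asarray(loudspeaker_angles_deg, dtype=float)
    observed = np.mean((perceived - presented) ** 2)
    random_expectation = np.mean((speakers[None, :] - presented[:, None]) ** 2)
    return observed / random_expectation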



from #Audiology via ola Kala on Inoreader http://ift.tt/2kPZkBn
via IFTTT

The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids

Publication date: January 2018
Source: Hearing Research, Volume 357
Author(s): Alan W. Archer-Boyd, Jack A. Holman, W. Owen Brimijoin
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones depends on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle, an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but would not attenuate off-axis signals so strongly that orienting to new signals becomes difficult or impossible. We investigated the latter part of this question. To measure the minimum monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild-to-moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise, and movements were tracked using a head-mounted crown and an infrared system that recorded yaw within a ring of loudspeakers. The target appeared randomly at ±45°, 90°, or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, movement duration and initial misorientation count increased first, then fixation error, and finally reversals. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, off-axis attenuation should be no more than 12 dB.
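
The practical recommendation, no more than about 12 dB of off-axis attenuation, can be pictured with a simple Python sketch of an idealized first-order cardioid response whose rear attenuation is clamped. The pattern and the clamping value are illustrative assumptions, not the processing of any particular hearing aid:

import numpy as np

def limited_directional_gain_db(angle_deg, max_attenuation_db=12.0):
    # Idealized first-order cardioid gain, with off-axis attenuation clamped so
    # it never exceeds max_attenuation_db (illustration only).
    theta = np.radians(angle_deg)
    cardioid = 0.5 * (1.0 + np.cos(theta))            # 1 at 0 deg, 0 at 180 deg
    gain_db = 20.0 * np.log10(np.maximum(cardioid, 1e-6))
    return np.maximum(gain_db, -max_attenuation_db)

print(limited_directional_gain_db(np.array([0.0, 90.0, 135.0, 180.0])))
# approximately [0, -6, -12, -12] dB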



from #Audiology via ola Kala on Inoreader http://ift.tt/2iNNJyA
via IFTTT

Frequency selectivity in macaque monkeys measured using a notched-noise method

Publication date: January 2018
Source: Hearing Research, Volume 357
Author(s): Jane A. Burton, Margit E. Dylla, Ramnarayan Ramachandran
The auditory system is thought to process complex sounds through overlapping bandpass filters. Frequency selectivity as estimated by auditory filters has been well quantified in humans and other mammalian species using behavioral and physiological methodologies, but little work has been done to examine frequency selectivity in nonhuman primates. In particular, knowledge of macaque frequency selectivity would help address the recent controversy over the sharpness of cochlear tuning in humans relative to other animal species. The purpose of our study was to investigate the frequency selectivity of macaque monkeys using a notched-noise paradigm. Four macaques were trained to detect tones in noises that were spectrally notched symmetrically and asymmetrically around the tone frequency. Masked tone thresholds decreased with increasing notch width. Auditory filter shapes were estimated using a rounded exponential function. Macaque auditory filters were symmetric at low noise levels and broader and more asymmetric at higher noise levels, with broader low-frequency and steeper high-frequency tails. Macaque filter bandwidths (BW_3dB) increased with increasing center frequency, similar to humans and other species. Estimates of equivalent rectangular bandwidth (ERB) and filter quality factor (Q_ERB) suggest that macaque filters are broader than human filters. These data shed further light on frequency selectivity across species and serve as a baseline for studies of neuronal frequency selectivity and of frequency selectivity in subjects with hearing loss.
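
The filter estimates rest on the rounded-exponential (roex) filter shape and the derived ERB and Q_ERB values. Below is a brief Python sketch of the standard symmetric roex(p) weighting function and its ERB, assuming the textbook relation ERB = 4*fc/p; the paper also fits asymmetric filters, which this simple sketch does not cover, and the parameter values are illustrative:

import numpy as np

def roex_weight(f, fc, p):
    # Symmetric rounded-exponential roex(p) filter weight at frequency f (Hz)
    # for a filter centered at fc with slope parameter p.
    g = np.abs(f - fc) / fc                  # normalized deviation from center
    return (1.0 + p * g) * np.exp(-p * g)

def erb_and_q(fc, p):
    # For a symmetric roex(p) filter, ERB = 4*fc/p and Q_ERB = fc/ERB = p/4.
    erb = 4.0 * fc / p
    return erb, fc / erb

# Illustrative parameters only (not the macaque estimates from the paper):
print(erb_and_q(fc=2000.0, p=20.0))          # ERB = 400 Hz, Q_ERB = 5.0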



from #Audiology via ola Kala on Inoreader http://ift.tt/2kPDy0L
via IFTTT

Electrically-evoked auditory steady-state responses as neural correlates of loudness growth in cochlear implant users

Publication date: Available online 8 December 2017
Source: Hearing Research
Author(s): Maaike Van Eeckhoutte, Jan Wouters, Tom Francart
Loudness growth functions characterize how the loudness percept changes with current level between the threshold and the most comfortable loudness level in cochlear implant users. Even though loudness growth functions are highly listener-dependent, default settings are currently used in clinical devices. This study investigated whether electrically-evoked auditory steady-state response amplitude growth functions correspond to behaviorally measured loudness growth functions. Seven cochlear implant listeners participated in two behavioral loudness growth tasks and an EEG recording session. Sinusoidally amplitude-modulated pulse trains with a 40-Hz modulation frequency were presented to CI channels stimulating a more apical and a more basal region of the cochlea, at different current levels encompassing the listeners’ dynamic ranges. Behaviorally, loudness growth was measured using Absolute Magnitude Estimation and a Graphical Rating Scale with loudness categories. A good correspondence was found between the response amplitude functions and the behavioral loudness growth functions. The results are encouraging for future advances in individual, more automatic, and objective fitting of cochlear implants.
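
The stimulus is a pulse train whose amplitudes follow a 40-Hz sinusoidal envelope. Below is a minimal Python sketch of such an envelope applied to a pulse train; the pulse rate, modulation depth, and amplitude units are assumptions for illustration, since the abstract does not give the actual stimulation parameters:

import numpy as np

def sam_pulse_train(duration_s=1.0, pulse_rate_hz=900.0, mod_freq_hz=40.0,
                    mod_depth=1.0, base_amplitude=1.0):
    # Pulse onset times and pulse amplitudes following a 40-Hz sinusoidal envelope.
    t = np.arange(0.0, duration_s, 1.0 / pulse_rate_hz)
    envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_freq_hz * t)
    amplitudes = 0.5 * base_amplitude * envelope
    return t, amplitudes

times, amps = sam_pulse_train()   # 900 pulses/s, amplitudes modulated at 40 Hz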



from #Audiology via ola Kala on Inoreader http://ift.tt/2iL0eed
via IFTTT

Self-reported Hearing Difficulty, Tinnitus, and Normal Audiometric Thresholds: The National Health and Nutrition Examination Survey 1999-2002

Publication date: Available online 7 December 2017
Source: Hearing Research
Author(s): Christopher Spankovich, Victoria B. Gonzalez, Dan Su, Charles E. Bishop
Perceived hearing difficulty (HD) and/or tinnitus in the presence of normal audiometric thresholds presents a clinical challenge. Yet there are limited data regarding the prevalence of HD and the determinant factors contributing to it. Here we present estimates generalized to the non-institutionalized population of the United States, based on the cross-sectional, population-based National Health and Nutrition Examination Survey (NHANES), in 2,176 participants (20-69 years of age). Normal audiometric thresholds were defined by a pure-tone average (PTA4) across 0.5, 1.0, 2.0, and 4.0 kHz of ≤ 25 dB HL in each ear. Hearing difficulty and tinnitus perception were self-reported. Of the 2,176 participants with complete data, 2,015 had normal audiometric thresholds based on PTA4; the prevalence of individuals with normal PTA4 who self-reported HD was 15%. The percentage of individuals with normal audiometric thresholds and persistent tinnitus was 10.6%. Multivariate logistic regression adjusting for age, sex, and hearing thresholds identified the following variables related to increased odds of HD: tinnitus, balance issues, noise exposure, arthritis, vision difficulties, neuropathic symptoms, and physical/mental/emotional issues; and the following related to increased odds of reported persistent tinnitus: HD, diabetes, arthritis, vision difficulties, confusion/memory issues, balance issues, noise exposure, high alcohol consumption, neuropathic symptoms, and analgesic use. Analyses using an alternative definition of normal hearing, pure-tone thresholds ≤ 25 dB HL at 0.5, 1.0, 2.0, 4.0, 6.0, and 8.0 kHz in each ear, revealed a lower prevalence of HD and tinnitus but comparable multivariate relationships. The findings suggest that the prevalence of HD depends on how normal hearing is defined and that the factors affecting the odds of reported HD include tinnitus, noise exposure, mental/cognitive status, and other sensory deficits.
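
The study's definition of normal hearing is operational: a four-frequency pure-tone average (PTA4) of 25 dB HL or better in each ear. A small Python sketch of that classification follows; the audiogram values are hypothetical:

def pta4(thresholds_dbhl):
    # Four-frequency pure-tone average over 0.5, 1, 2, and 4 kHz (dB HL).
    required = (500, 1000, 2000, 4000)
    return sum(thresholds_dbhl[f] for f in required) / len(required)

def normal_pta4(left_ear, right_ear, cutoff_dbhl=25.0):
    # Normal audiometric thresholds as defined in the study: PTA4 <= 25 dB HL in each ear.
    return pta4(left_ear) <= cutoff_dbhl and pta4(right_ear) <= cutoff_dbhl

left = {500: 10, 1000: 15, 2000: 20, 4000: 25}
right = {500: 5, 1000: 10, 2000: 15, 4000: 30}
print(normal_pta4(left, right))   # True: PTA4 is 17.5 dB HL left and 15 dB HL right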



from #Audiology via ola Kala on Inoreader http://ift.tt/2kO9By3
via IFTTT

A framework for testing and comparing binaural models

Publication date: Available online 28 November 2017
Source: Hearing Research
Author(s): Mathias Dietz, Jean-Hugues Lestang, Piotr Majdak, Richard M. Stern, Torsten Marquardt, Stephan D. Ewert, William M. Hartmann, Dan F.M. Goodman
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These are best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but several unresolved questions remain for which competing model approaches exist. This article discusses a number of currently unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models through the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as a test subject.
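
The three components named in the abstract (experiment software, auditory pathway model, artificial observer) suggest a plug-in style interface. The Python sketch below shows only one way such an interface could look; the class and method names are assumptions, not the framework's actual API:

from abc import ABC, abstractmethod

class AuditoryPathwayModel(ABC):
    # Transforms binaural input signals into an internal representation.
    @abstractmethod
    def process(self, left_signal, right_signal, fs):
        ...

class ArtificialObserver(ABC):
    # Task-dependent decision stage returning the same output format as a test subject.
    @abstractmethod
    def decide(self, internal_representation):
        ...

class ExperimentSoftware(ABC):
    # Presents stimuli and collects responses, from a listener or a model-plus-observer.
    @abstractmethod
    def run_trial(self, respond):
        # `respond` is a callable mapping the presented stimuli to a response.
        ...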



from #Audiology via ola Kala on Inoreader http://ift.tt/2iOFbHI
via IFTTT

Sustained frontal midline theta enhancements during effortful listening track working memory demands

Publication date: Available online 27 November 2017
Source: Hearing Research
Author(s): Matthew G. Wisniewski, Nandini Iyer, Eric R. Thompson, Brian D. Simpson
Recent studies demonstrate that frontal midline theta power (4–8 Hz) enhancements in the electroencephalogram (EEG) relate to effortful listening. It has been proposed that these enhancements reflect working memory demands. Here, the need to retain auditory information in working memory was manipulated in a 2-interval, 2-alternative forced-choice delayed pitch discrimination task (“Which interval contained the higher pitch?”). On each trial, two square-wave stimuli differing in pitch at an individual’s ∼70.7% correct threshold were separated by a 3-second interstimulus interval (ISI). In a ‘Roving’ condition, the lowest-pitch stimulus was randomly selected on each trial (uniform distribution from 840 to 1160 Hz). In a ‘Fixed’ condition, the lowest pitch was always 979 Hz. Critically, the ‘Fixed’ condition allowed one to know the correct response immediately following the first stimulus (e.g., if the first stimulus is 979 Hz, the second must be higher). In contrast, the ‘Roving’ condition required retention of the first tone for comparison to the second. Frontal midline theta enhancements during the ISI were observed only for the ‘Roving’ condition. Alpha (8–13 Hz) enhancements were apparent during the ISI but did not differ significantly between conditions. Since conditions were matched for accuracy at threshold, the results suggest that frontal midline theta enhancements will not always accompany difficult listening. Mixed results in the literature regarding frontal midline theta enhancements may be related to differences between tasks in their working memory demands. Alpha enhancements may reflect a task-general set of effortful listening processes.
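
Frontal midline theta and alpha enhancements are band-limited power measures. Below is a generic Python sketch of how power in the 4–8 Hz (theta) and 8–13 Hz (alpha) bands could be estimated from a single EEG channel during the ISI using Welch's method; this is not the study's exact time-frequency analysis, and the sampling rate and data are placeholders:

import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_lo, f_hi):
    # Integrate the Welch power spectral density over a frequency band.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[band], freqs[band])

fs = 256                                # assumed sampling rate (Hz)
isi_segment = np.random.randn(3 * fs)   # placeholder for 3 s of ISI data
theta_power = band_power(isi_segment, fs, 4.0, 8.0)
alpha_power = band_power(isi_segment, fs, 8.0, 13.0)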



from #Audiology via ola Kala on Inoreader http://ift.tt/2kO4ZYA
via IFTTT

Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features

Publication date: Available online 20 November 2017
Source: Hearing Research
Author(s): Srikanta K. Mishra, Carrick L. Talmadge
In recent years, there has been growing interest in measuring stimulus frequency otoacoustic emissions (SFOAEs) using sweep tones. While the sweep-tone technique has several advantages, one of the major problems with sweep-tone methodologies is the lack of an objective analysis procedure that identifies and rejects individual noisy recordings or noisy segments. A new, efficient, data-driven method for rejecting noisy segments in SFOAE analysis is proposed, and the normative features of SFOAEs are characterized in fifty normal-hearing young adults. The automated procedure involved phase detrending with a low-order polynomial and application of the median and interquartile range for outlier rejection from individual recordings. The SFOAE level and phase were analyzed using a least-squares fit, and the noise floor was estimated from the error of the mean of the sweep level. Overall, the results of this study demonstrate the effectiveness of the automated noise rejection procedure and describe the normative features of sweep-tone-evoked SFOAEs in human adults.
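
The described procedure combines low-order polynomial phase detrending with median/interquartile-range outlier rejection. Below is a hedged Python sketch of that general idea; the polynomial order, the rejection factor, and the exact quantity screened are assumptions rather than the authors' settings:

import numpy as np

def reject_noisy_segments(phase_rad, level_db, poly_order=3, k=1.5):
    # Detrend the unwrapped SFOAE phase with a low-order polynomial, then keep only
    # segments whose residuals lie within median +/- k * interquartile range.
    x = np.arange(len(phase_rad))
    unwrapped = np.unwrap(np.asarray(phase_rad, dtype=float))
    trend = np.polyval(np.polyfit(x, unwrapped, poly_order), x)
    residual = unwrapped - trend
    q1, med, q3 = np.percentile(residual, [25, 50, 75])
    iqr = q3 - q1
    keep = (residual >= med - k * iqr) & (residual <= med + k * iqr)
    return np.asarray(level_db)[keep], unwrapped[keep], keep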



from #Audiology via ola Kala on Inoreader http://ift.tt/2kP6XZ0
via IFTTT

Minimally invasive laser vibrometry (MIVIB) with a floating mass transducer – A new method for objective evaluation of the middle ear demonstrated on stapes fixation

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Jeremy Wales, Kilian Gladiné, Paul Van de Heyning, Vedat Topsakal, Magnus von Unge, Joris Dirckx
Ossicular fixation through otosclerosis, chronic otitis media and other pathologies, especially tympanosclerosis, are treated by surgery if hearing aids fail as an alternative. However, the best hearing outcome is often based on knowledge of the degree and location of the fixation. Objective methods to quantify the degree and position of the fixation are largely lacking. Laser vibrometry is a known method to detect ossicular fixation but clinical applicability remains limited. A new method, minimally invasive laser vibrometry (MIVIB), is presented to quantify ossicle mobility using laser vibrometry measurement through the ear canal after elevating the tympanic membrane, thus making the method feasible in minimally invasive explorative surgery. A floating mass transducer provides a clinically relevant transducer to drive ossicular vibration. This device was attached to the manubrium and drove vibrations at the same angle as the longitudinal axis of the stapes and was therefore used to assess ossicular chain mobility in a fresh-frozen temporal bone model with and without stapes fixation. The ratio between the umbo and incus long process was shown to be useful in assessing stapes fixation. The incus-to-umbo velocity ratio decreased by 15 dB when comparing the unfixated situation to stapes fixation up to 2.5 kHz. Such quantification of ossicular fixation using the incus-to-umbo velocity ratio would allow quick and objective analysis of ossicular chain fixations which will assist the surgeon in surgical planning and optimize hearing outcomes.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iJWGc6
via IFTTT

The effect of simulated unilateral hearing loss on horizontal sound localization accuracy and recognition of speech in spatially separate competing speech

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Filip Asp, Anne-Marie Jakobsson, Erik Berninger
Unilateral hearing loss (UHL) occurs in 25% of cases of congenital sensorineural hearing loss. Due to the unilaterally reduced audibility associated with UHL, everyday demanding listening situations may be disrupted despite normal hearing in one ear. The aim of this study was to quantify acute changes in recognition of speech in spatially separate competing speech and sound localization accuracy, and relate those changes to two levels of temporary induced UHL (UHL30 and UHL43; suffixes denote the average hearing threshold across 0.5, 1, 2, and 4 kHz) for 8 normal-hearing adults. A within-subject repeated-measures design was used (normal binaural conditions, UHL30 and UHL43). The main outcome measures were the threshold for 40% correct speech recognition and the overall variance in sound localization accuracy quantified by an Error Index (0 = perfect performance, 1.0 = random performance). Distinct and statistically significant deterioration in speech recognition (2.0 dB increase in threshold, p < 0.01) and sound localization (Error Index increase of 0.16, p < 0.001) occurred in the UHL30 condition. Speech recognition did not significantly deteriorate further in the UHL43 condition (1.0 dB increase in speech recognition threshold, p > 0.05), while sound localization was additionally impaired (Error Index increase of 0.33, p < 0.01) with an associated large increase in individual variability. Qualitative analyses on a subject-by-subject basis showed that high-frequency audibility was important for speech recognition, while low-frequency audibility was important for horizontal sound localization accuracy. While the data might not be entirely applicable to individuals with long-standing UHL, the results suggest a need for intervention for mild-to-moderate UHL.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kPZkBn
via IFTTT

The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Alan W. Archer-Boyd, Jack A. Holman, W. Owen Brimijoin
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals was difficult or impossible. We investigated the latter part of this question. In order to measure the minimal monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise and movements were tracked using a head-mounted crown and infrared system that recorded yaw in a ring of loudspeakers. The target appeared randomly at ± 45, 90 or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, first movement duration and initial misorientation count increased, then fixation error, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iNNJyA
via IFTTT

Frequency selectivity in macaque monkeys measured using a notched-noise method

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Jane A. Burton, Margit E. Dylla, Ramnarayan Ramachandran
The auditory system is thought to process complex sounds through overlapping bandpass filters. Frequency selectivity as estimated by auditory filters has been well quantified in humans and other mammalian species using behavioral and physiological methodologies, but little work has been done to examine frequency selectivity in nonhuman primates. In particular, knowledge of macaque frequency selectivity would help address the recent controversy over the sharpness of cochlear tuning in humans relative to other animal species. The purpose of our study was to investigate the frequency selectivity of macaque monkeys using a notched-noise paradigm. Four macaques were trained to detect tones in noises that were spectrally notched symmetrically and asymmetrically around the tone frequency. Masked tone thresholds decreased with increasing notch width. Auditory filter shapes were estimated using a rounded exponential function. Macaque auditory filters were symmetric at low noise levels and broader and more asymmetric at higher noise levels with broader low-frequency and steeper high-frequency tails. Macaque filter bandwidths (BW3dB) increased with increasing center frequency, similar to humans and other species. Estimates of equivalent rectangular bandwidth (ERB) and filter quality factor (QERB) suggest macaque filters are broader than human filters. These data shed further light on frequency selectivity across species and serve as a baseline for studies of neuronal frequency selectivity and frequency selectivity in subjects with hearing loss.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kPDy0L
via IFTTT

Electrically-evoked auditory steady-state responses as neural correlates of loudness growth in cochlear implant users

Publication date: Available online 8 December 2017
Source:Hearing Research
Author(s): Maaike Van Eeckhoutte, Jan Wouters, Tom Francart
Loudness growth functions characterize how the loudness percept changes with current level between the threshold and most comfortable loudness level in cochlear implant users. Even though loudness growth functions are highly listener-dependent, currently default settings are used in clinical devices. This study investigated whether electrically-evoked auditory steady-state response amplitude growth functions correspond to behaviorally measured loudness growth functions. Seven cochlear implant listeners participated in two behavioral loudness growth tasks and an EEG recording session. The 40-Hz sinusoidally-amplitude-modulated pulse trains were presented to CI channels stimulating at a more apical and basal region of the cochlea, and were presented at different current levels encompassing the listeners' dynamic ranges. Behaviorally, loudness growth was measured using an Absolute Magnitude Estimation and a Graphical Rating Scale with loudness categories. A good correspondence was found between the response amplitude functions and the behavioral loudness growth functions. The results are encouraging for future advances in individual, more automatic, and objective fitting of cochlear implants.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iL0eed
via IFTTT

Self reported Hearing Difficulty, Tinnitus, and Normal Audiometric Thresholds, The National Health and Nutrition Examination Survey 1999-2002

Publication date: Available online 7 December 2017
Source:Hearing Research
Author(s): Christopher Spankovich, Victoria B. Gonzalez, Dan Su, Charles E. Bishop
Perceived hearing difficulty (HD) and/or tinnitus in the presence of normal audiometric thresholds present a clinical challenge. Yet, there is limited data regarding prevalence and determinant factors contributing to HD. Here we present estimates generalized to the non-institutionalized population of the United States based on the cross-sectional population-based study, the National Health and Nutrition and Examination Survey (NHANES) in 2,176 participants (20-69 years of age). Normal audiometric thresholds were defined by pure-tone average (PTA4) of 0.5, 1.0, 2.0, 4.0 kHz ≤ 25 dBHL in each ear. Hearing difficulty (HD) and tinnitus perception was self-reported. Of the 2,176 participants with complete data, 2,015 had normal audiometric thresholds based on PTA4; the prevalence of individuals with normal PTA4 that self-reported HD was 15%. The percentage of individuals with normal audiometric threshold and persistent tinnitus was 10.6%. Multivariate logistic regression adjusting for age, sex, and hearing thresholds identified the following variables related to increased odds of HD: tinnitus, balance issues, noise exposure, arthritis, vision difficulties, neuropathic symptoms, physical/mental/emotional issues; and for increased odds or reported persistent tinnitus: HD, diabetes, arthritis, vision difficulties, confusion/memory issues, balance issues, noise exposure, high alcohol consumption, neuropathic symptoms and analgesic use. Analyses using an alternative definition of normal hearing, pure-tone thresholds ≤ 25 dBHL at 0.5, 1.0, 2.0, 4.0, 6.0, and 8.0 kHz in each ear, revealed lower prevalence of HD and tinnitus, but comparable multivariate relationships. The findings suggest that prevalence of HD is dependent on how normal hearing is defined and the factors that impact odds of reported HD include tinnitus, noise exposure, mental/cognitive status, and other sensory deficits.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kO9By3
via IFTTT

A framework for testing and comparing binaural models

Publication date: Available online 28 November 2017
Source:Hearing Research
Author(s): Mathias Dietz, Jean-Hugues Lestang, Piotr Majdak, Richard M. Stern, Torsten Marquardt, Stephan D. Ewert, William M. Hartmann, Dan F.M. Goodman
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iOFbHI
via IFTTT

Sustained frontal midline theta enhancements during effortful listening track working memory demands

Publication date: Available online 27 November 2017
Source:Hearing Research
Author(s): Matthew G. Wisniewski, Nandini Iyer, Eric R. Thompson, Brian D. Simpson
Recent studies demonstrate that frontal midline theta power (4–8 Hz) enhancements in the electroencephalogram (EEG) relate to effortful listening. It has been proposed that these enhancements reflect working memory demands. Here, the need to retain auditory information in working memory was manipulated in a 2-interval 2-alternative forced-choice delayed pitch discrimination task (“Which interval contained the higher pitch?”). On each trial, two square wave stimuli differing in pitch at an individual's ∼70.7% correct threshold were separated by a 3-second ISI. In a ‘Roving’ condition, the lowest pitch stimulus was randomly selected on each trial (uniform distribution from 840 – 1160 Hz). In a ‘Fixed’ condition, the lowest pitch was always 979 Hz. Critically, the ‘Fixed’ condition allowed one to know the correct response immediately following the first stimulus (e.g., if the first stimulus is 979 Hz, the second must be higher). In contrast, the ‘Roving’ condition required retention of the first tone for comparison to the second. Frontal midline theta enhancements during the ISI were only observed for the ‘Roving’ condition. Alpha (8–13 Hz) enhancements were apparent during the ISI, but did not differ significantly between conditions. Since conditions were matched for accuracy at threshold, results suggest that frontal midline theta enhancements will not always accompany difficult listening. Mixed results in the literature regarding frontal midline theta enhancements may be related to differences between tasks in regards to working memory demands. Alpha enhancements may reflect a task general set of effortful listening processes.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kO4ZYA
via IFTTT

Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features

Publication date: Available online 20 November 2017
Source:Hearing Research
Author(s): Srikanta K. Mishra, Carrick L. Talmadge
In recent years, there has been a growing interest to measure stimulus frequency otoacoustic emissions (SFOAEs) using sweep tones. While there are several advantages of the sweep-tone technique, one of the major problems with sweep-tone methodologies is the lack of an objective analysis procedure that considers and rejects individual noisy recordings or noisy segments. A new efficient data-driven method for rejecting noisy segments in SFOAE analysis is proposed and the normative features of SFOAEs are characterized in fifty normal-hearing young adults. The automated procedure involved phase detrending with a low-order polynomial and application of median and interquartile ranges for data outlier rejection from individual recordings. The SFOAE level and phase were analyzed using the least-squared fit method, and the noise floor was estimated using the error of the mean of the sweep level. Overall, the results of this study demonstrated the effectiveness of the automated noise rejection procedure and described the normative features of sweep-tone evoked SFOAEs in human adults.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kP6XZ0
via IFTTT

Minimally invasive laser vibrometry (MIVIB) with a floating mass transducer – A new method for objective evaluation of the middle ear demonstrated on stapes fixation

elsevier-non-solus.png

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Jeremy Wales, Kilian Gladiné, Paul Van de Heyning, Vedat Topsakal, Magnus von Unge, Joris Dirckx
Ossicular fixation through otosclerosis, chronic otitis media and other pathologies, especially tympanosclerosis, are treated by surgery if hearing aids fail as an alternative. However, the best hearing outcome is often based on knowledge of the degree and location of the fixation. Objective methods to quantify the degree and position of the fixation are largely lacking. Laser vibrometry is a known method to detect ossicular fixation but clinical applicability remains limited. A new method, minimally invasive laser vibrometry (MIVIB), is presented to quantify ossicle mobility using laser vibrometry measurement through the ear canal after elevating the tympanic membrane, thus making the method feasible in minimally invasive explorative surgery. A floating mass transducer provides a clinically relevant transducer to drive ossicular vibration. This device was attached to the manubrium and drove vibrations at the same angle as the longitudinal axis of the stapes and was therefore used to assess ossicular chain mobility in a fresh-frozen temporal bone model with and without stapes fixation. The ratio between the umbo and incus long process was shown to be useful in assessing stapes fixation. The incus-to-umbo velocity ratio decreased by 15 dB when comparing the unfixated situation to stapes fixation up to 2.5 kHz. Such quantification of ossicular fixation using the incus-to-umbo velocity ratio would allow quick and objective analysis of ossicular chain fixations which will assist the surgeon in surgical planning and optimize hearing outcomes.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iJWGc6
via IFTTT

The effect of simulated unilateral hearing loss on horizontal sound localization accuracy and recognition of speech in spatially separate competing speech

elsevier-non-solus.png

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Filip Asp, Anne-Marie Jakobsson, Erik Berninger
Unilateral hearing loss (UHL) occurs in 25% of cases of congenital sensorineural hearing loss. Due to the unilaterally reduced audibility associated with UHL, everyday demanding listening situations may be disrupted despite normal hearing in one ear. The aim of this study was to quantify acute changes in recognition of speech in spatially separate competing speech and sound localization accuracy, and relate those changes to two levels of temporary induced UHL (UHL30 and UHL43; suffixes denote the average hearing threshold across 0.5, 1, 2, and 4 kHz) for 8 normal-hearing adults. A within-subject repeated-measures design was used (normal binaural conditions, UHL30 and UHL43). The main outcome measures were the threshold for 40% correct speech recognition and the overall variance in sound localization accuracy quantified by an Error Index (0 = perfect performance, 1.0 = random performance). Distinct and statistically significant deterioration in speech recognition (2.0 dB increase in threshold, p < 0.01) and sound localization (Error Index increase of 0.16, p < 0.001) occurred in the UHL30 condition. Speech recognition did not significantly deteriorate further in the UHL43 condition (1.0 dB increase in speech recognition threshold, p > 0.05), while sound localization was additionally impaired (Error Index increase of 0.33, p < 0.01) with an associated large increase in individual variability. Qualitative analyses on a subject-by-subject basis showed that high-frequency audibility was important for speech recognition, while low-frequency audibility was important for horizontal sound localization accuracy. While the data might not be entirely applicable to individuals with long-standing UHL, the results suggest a need for intervention for mild-to-moderate UHL.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kPZkBn
via IFTTT

The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Alan W. Archer-Boyd, Jack A. Holman, W. Owen Brimijoin
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals was difficult or impossible. We investigated the latter part of this question. In order to measure the minimal monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise and movements were tracked using a head-mounted crown and infrared system that recorded yaw in a ring of loudspeakers. The target appeared randomly at ± 45, 90 or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, first movement duration and initial misorientation count increased, then fixation error, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iNNJyA
via IFTTT

Frequency selectivity in macaque monkeys measured using a notched-noise method

S03785955.gif

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Jane A. Burton, Margit E. Dylla, Ramnarayan Ramachandran
The auditory system is thought to process complex sounds through overlapping bandpass filters. Frequency selectivity as estimated by auditory filters has been well quantified in humans and other mammalian species using behavioral and physiological methodologies, but little work has been done to examine frequency selectivity in nonhuman primates. In particular, knowledge of macaque frequency selectivity would help address the recent controversy over the sharpness of cochlear tuning in humans relative to other animal species. The purpose of our study was to investigate the frequency selectivity of macaque monkeys using a notched-noise paradigm. Four macaques were trained to detect tones in noises that were spectrally notched symmetrically and asymmetrically around the tone frequency. Masked tone thresholds decreased with increasing notch width. Auditory filter shapes were estimated using a rounded exponential function. Macaque auditory filters were symmetric at low noise levels and broader and more asymmetric at higher noise levels with broader low-frequency and steeper high-frequency tails. Macaque filter bandwidths (BW3dB) increased with increasing center frequency, similar to humans and other species. Estimates of equivalent rectangular bandwidth (ERB) and filter quality factor (QERB) suggest macaque filters are broader than human filters. These data shed further light on frequency selectivity across species and serve as a baseline for studies of neuronal frequency selectivity and frequency selectivity in subjects with hearing loss.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kPDy0L
via IFTTT

Electrically-evoked auditory steady-state responses as neural correlates of loudness growth in cochlear implant users

S03785955.gif

Publication date: Available online 8 December 2017
Source:Hearing Research
Author(s): Maaike Van Eeckhoutte, Jan Wouters, Tom Francart
Loudness growth functions characterize how the loudness percept changes with current level between the threshold and most comfortable loudness level in cochlear implant users. Even though loudness growth functions are highly listener-dependent, currently default settings are used in clinical devices. This study investigated whether electrically-evoked auditory steady-state response amplitude growth functions correspond to behaviorally measured loudness growth functions. Seven cochlear implant listeners participated in two behavioral loudness growth tasks and an EEG recording session. The 40-Hz sinusoidally-amplitude-modulated pulse trains were presented to CI channels stimulating at a more apical and basal region of the cochlea, and were presented at different current levels encompassing the listeners' dynamic ranges. Behaviorally, loudness growth was measured using an Absolute Magnitude Estimation and a Graphical Rating Scale with loudness categories. A good correspondence was found between the response amplitude functions and the behavioral loudness growth functions. The results are encouraging for future advances in individual, more automatic, and objective fitting of cochlear implants.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iL0eed
via IFTTT

Self reported Hearing Difficulty, Tinnitus, and Normal Audiometric Thresholds, The National Health and Nutrition Examination Survey 1999-2002

S03785955.gif

Publication date: Available online 7 December 2017
Source:Hearing Research
Author(s): Christopher Spankovich, Victoria B. Gonzalez, Dan Su, Charles E. Bishop
Perceived hearing difficulty (HD) and/or tinnitus in the presence of normal audiometric thresholds present a clinical challenge. Yet, there is limited data regarding prevalence and determinant factors contributing to HD. Here we present estimates generalized to the non-institutionalized population of the United States based on the cross-sectional population-based study, the National Health and Nutrition and Examination Survey (NHANES) in 2,176 participants (20-69 years of age). Normal audiometric thresholds were defined by pure-tone average (PTA4) of 0.5, 1.0, 2.0, 4.0 kHz ≤ 25 dBHL in each ear. Hearing difficulty (HD) and tinnitus perception was self-reported. Of the 2,176 participants with complete data, 2,015 had normal audiometric thresholds based on PTA4; the prevalence of individuals with normal PTA4 that self-reported HD was 15%. The percentage of individuals with normal audiometric threshold and persistent tinnitus was 10.6%. Multivariate logistic regression adjusting for age, sex, and hearing thresholds identified the following variables related to increased odds of HD: tinnitus, balance issues, noise exposure, arthritis, vision difficulties, neuropathic symptoms, physical/mental/emotional issues; and for increased odds or reported persistent tinnitus: HD, diabetes, arthritis, vision difficulties, confusion/memory issues, balance issues, noise exposure, high alcohol consumption, neuropathic symptoms and analgesic use. Analyses using an alternative definition of normal hearing, pure-tone thresholds ≤ 25 dBHL at 0.5, 1.0, 2.0, 4.0, 6.0, and 8.0 kHz in each ear, revealed lower prevalence of HD and tinnitus, but comparable multivariate relationships. The findings suggest that prevalence of HD is dependent on how normal hearing is defined and the factors that impact odds of reported HD include tinnitus, noise exposure, mental/cognitive status, and other sensory deficits.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kO9By3
via IFTTT

A framework for testing and comparing binaural models

S03785955.gif

Publication date: Available online 28 November 2017
Source:Hearing Research
Author(s): Mathias Dietz, Jean-Hugues Lestang, Piotr Majdak, Richard M. Stern, Torsten Marquardt, Stephan D. Ewert, William M. Hartmann, Dan F.M. Goodman
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject.



from #Audiology via ola Kala on Inoreader http://ift.tt/2iOFbHI
via IFTTT

Sustained frontal midline theta enhancements during effortful listening track working memory demands

S03785955.gif

Publication date: Available online 27 November 2017
Source:Hearing Research
Author(s): Matthew G. Wisniewski, Nandini Iyer, Eric R. Thompson, Brian D. Simpson
Recent studies demonstrate that frontal midline theta power (4–8 Hz) enhancements in the electroencephalogram (EEG) relate to effortful listening. It has been proposed that these enhancements reflect working memory demands. Here, the need to retain auditory information in working memory was manipulated in a 2-interval 2-alternative forced-choice delayed pitch discrimination task (“Which interval contained the higher pitch?”). On each trial, two square wave stimuli differing in pitch at an individual's ∼70.7% correct threshold were separated by a 3-second ISI. In a ‘Roving’ condition, the lowest pitch stimulus was randomly selected on each trial (uniform distribution from 840 – 1160 Hz). In a ‘Fixed’ condition, the lowest pitch was always 979 Hz. Critically, the ‘Fixed’ condition allowed one to know the correct response immediately following the first stimulus (e.g., if the first stimulus is 979 Hz, the second must be higher). In contrast, the ‘Roving’ condition required retention of the first tone for comparison to the second. Frontal midline theta enhancements during the ISI were only observed for the ‘Roving’ condition. Alpha (8–13 Hz) enhancements were apparent during the ISI, but did not differ significantly between conditions. Since conditions were matched for accuracy at threshold, results suggest that frontal midline theta enhancements will not always accompany difficult listening. Mixed results in the literature regarding frontal midline theta enhancements may be related to differences between tasks in regards to working memory demands. Alpha enhancements may reflect a task general set of effortful listening processes.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kO4ZYA
via IFTTT

Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features

elsevier-non-solus.png

Publication date: Available online 20 November 2017
Source:Hearing Research
Author(s): Srikanta K. Mishra, Carrick L. Talmadge
In recent years, there has been a growing interest to measure stimulus frequency otoacoustic emissions (SFOAEs) using sweep tones. While there are several advantages of the sweep-tone technique, one of the major problems with sweep-tone methodologies is the lack of an objective analysis procedure that considers and rejects individual noisy recordings or noisy segments. A new efficient data-driven method for rejecting noisy segments in SFOAE analysis is proposed and the normative features of SFOAEs are characterized in fifty normal-hearing young adults. The automated procedure involved phase detrending with a low-order polynomial and application of median and interquartile ranges for data outlier rejection from individual recordings. The SFOAE level and phase were analyzed using the least-squared fit method, and the noise floor was estimated using the error of the mean of the sweep level. Overall, the results of this study demonstrated the effectiveness of the automated noise rejection procedure and described the normative features of sweep-tone evoked SFOAEs in human adults.



from #Audiology via ola Kala on Inoreader http://ift.tt/2kP6XZ0
via IFTTT

Minimally invasive laser vibrometry (MIVIB) with a floating mass transducer – A new method for objective evaluation of the middle ear demonstrated on stapes fixation

elsevier-non-solus.png

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Jeremy Wales, Kilian Gladiné, Paul Van de Heyning, Vedat Topsakal, Magnus von Unge, Joris Dirckx
Ossicular fixation through otosclerosis, chronic otitis media and other pathologies, especially tympanosclerosis, are treated by surgery if hearing aids fail as an alternative. However, the best hearing outcome is often based on knowledge of the degree and location of the fixation. Objective methods to quantify the degree and position of the fixation are largely lacking. Laser vibrometry is a known method to detect ossicular fixation but clinical applicability remains limited. A new method, minimally invasive laser vibrometry (MIVIB), is presented to quantify ossicle mobility using laser vibrometry measurement through the ear canal after elevating the tympanic membrane, thus making the method feasible in minimally invasive explorative surgery. A floating mass transducer provides a clinically relevant transducer to drive ossicular vibration. This device was attached to the manubrium and drove vibrations at the same angle as the longitudinal axis of the stapes and was therefore used to assess ossicular chain mobility in a fresh-frozen temporal bone model with and without stapes fixation. The ratio between the umbo and incus long process was shown to be useful in assessing stapes fixation. The incus-to-umbo velocity ratio decreased by 15 dB when comparing the unfixated situation to stapes fixation up to 2.5 kHz. Such quantification of ossicular fixation using the incus-to-umbo velocity ratio would allow quick and objective analysis of ossicular chain fixations which will assist the surgeon in surgical planning and optimize hearing outcomes.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iJWGc6
via IFTTT

The effect of simulated unilateral hearing loss on horizontal sound localization accuracy and recognition of speech in spatially separate competing speech

elsevier-non-solus.png

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Filip Asp, Anne-Marie Jakobsson, Erik Berninger
Unilateral hearing loss (UHL) occurs in 25% of cases of congenital sensorineural hearing loss. Due to the unilaterally reduced audibility associated with UHL, everyday demanding listening situations may be disrupted despite normal hearing in one ear. The aim of this study was to quantify acute changes in recognition of speech in spatially separate competing speech and sound localization accuracy, and relate those changes to two levels of temporary induced UHL (UHL30 and UHL43; suffixes denote the average hearing threshold across 0.5, 1, 2, and 4 kHz) for 8 normal-hearing adults. A within-subject repeated-measures design was used (normal binaural conditions, UHL30 and UHL43). The main outcome measures were the threshold for 40% correct speech recognition and the overall variance in sound localization accuracy quantified by an Error Index (0 = perfect performance, 1.0 = random performance). Distinct and statistically significant deterioration in speech recognition (2.0 dB increase in threshold, p < 0.01) and sound localization (Error Index increase of 0.16, p < 0.001) occurred in the UHL30 condition. Speech recognition did not significantly deteriorate further in the UHL43 condition (1.0 dB increase in speech recognition threshold, p > 0.05), while sound localization was additionally impaired (Error Index increase of 0.33, p < 0.01) with an associated large increase in individual variability. Qualitative analyses on a subject-by-subject basis showed that high-frequency audibility was important for speech recognition, while low-frequency audibility was important for horizontal sound localization accuracy. While the data might not be entirely applicable to individuals with long-standing UHL, the results suggest a need for intervention for mild-to-moderate UHL.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2kPZkBn
via IFTTT

The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids

Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Alan W. Archer-Boyd, Jack A. Holman, W. Owen Brimijoin
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones depends on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle, an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals becomes difficult or impossible. We investigated the latter part of this question. To measure the minimum monitoring SNR needed for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild-to-moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise; movements were tracked using a head-mounted crown and an infrared system that recorded yaw within a ring of loudspeakers. The target appeared randomly at ±45°, 90° or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, movement duration and initial misorientation count increased first, then fixation error increased, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that a directional microphone intended to be usable in a moving conversation should attenuate off-axis signals by no more than 12 dB.
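
As a rough illustration of how such orientation metrics can be derived from a head-yaw trace, the sketch below computes movement duration and a simple velocity-reversal count. The velocity threshold and the example trajectory are assumptions, not the study's analysis parameters.

```python
import numpy as np

def orientation_metrics(t_s, yaw_deg, vel_thresh_dps=5.0):
    """Return (movement duration in s, number of velocity-sign reversals).
    vel_thresh_dps is an assumed threshold separating movement from rest."""
    t = np.asarray(t_s, float)
    yaw = np.asarray(yaw_deg, float)
    vel = np.gradient(yaw, t)                 # head velocity in deg/s
    moving = np.abs(vel) > vel_thresh_dps
    duration = 0.0
    if moving.any():
        idx = np.where(moving)[0]
        duration = float(t[idx[-1]] - t[idx[0]])
    # count changes in movement direction while the head is in motion
    signs = np.sign(vel[moving])
    reversals = int(np.sum(np.diff(signs) != 0))
    return duration, reversals

# Example: a turn towards +90 deg with one brief corrective movement back at the end.
t = np.linspace(0.0, 2.0, 201)
yaw = 95.0 * np.minimum(t / 1.2, 1.0) - 5.0 * (t > 1.5)
print(orientation_metrics(t, yaw))
```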



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iNNJyA
via IFTTT

Frequency selectivity in macaque monkeys measured using a notched-noise method


Publication date: January 2018
Source:Hearing Research, Volume 357
Author(s): Jane A. Burton, Margit E. Dylla, Ramnarayan Ramachandran
The auditory system is thought to process complex sounds through overlapping bandpass filters. Frequency selectivity as estimated by auditory filters has been well quantified in humans and other mammalian species using behavioral and physiological methodologies, but little work has been done to examine frequency selectivity in nonhuman primates. In particular, knowledge of macaque frequency selectivity would help address the recent controversy over the sharpness of cochlear tuning in humans relative to other animal species. The purpose of our study was to investigate the frequency selectivity of macaque monkeys using a notched-noise paradigm. Four macaques were trained to detect tones in noises that were spectrally notched symmetrically and asymmetrically around the tone frequency. Masked tone thresholds decreased with increasing notch width. Auditory filter shapes were estimated using a rounded exponential function. Macaque auditory filters were symmetric at low noise levels and broader and more asymmetric at higher noise levels with broader low-frequency and steeper high-frequency tails. Macaque filter bandwidths (BW3dB) increased with increasing center frequency, similar to humans and other species. Estimates of equivalent rectangular bandwidth (ERB) and filter quality factor (QERB) suggest macaque filters are broader than human filters. These data shed further light on frequency selectivity across species and serve as a baseline for studies of neuronal frequency selectivity and frequency selectivity in subjects with hearing loss.
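
For readers unfamiliar with the notched-noise approach, the sketch below shows the symmetric rounded-exponential, roex(p), filter shape commonly fitted in such studies and how ERB and Q_ERB follow from the fitted slope parameter. The study's actual fitting procedure and any asymmetric filter variants are not reproduced here, and the example numbers are made up.

```python
import numpy as np

def roex_p(f, fc, p):
    """Power weighting W(g) = (1 + p*g) * exp(-p*g), with g = |f - fc| / fc."""
    g = np.abs(np.asarray(f, float) - fc) / fc
    return (1.0 + p * g) * np.exp(-p * g)

def erb_from_p(fc, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter."""
    return 4.0 * fc / p

def q_erb(fc, erb):
    """Filter quality factor: center frequency divided by ERB."""
    return fc / erb

fc, p = 4000.0, 25.0          # hypothetical center frequency and slope parameter
erb = erb_from_p(fc, p)
print(f"ERB = {erb:.0f} Hz, Q_ERB = {q_erb(fc, erb):.1f}")
```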



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2kPDy0L
via IFTTT

Electrically-evoked auditory steady-state responses as neural correlates of loudness growth in cochlear implant users


Publication date: Available online 8 December 2017
Source:Hearing Research
Author(s): Maaike Van Eeckhoutte, Jan Wouters, Tom Francart
Loudness growth functions characterize how the loudness percept changes with current level between the threshold and the most comfortable loudness level in cochlear implant users. Even though loudness growth functions are highly listener-dependent, default settings are currently used in clinical devices. This study investigated whether electrically-evoked auditory steady-state response amplitude growth functions correspond to behaviorally measured loudness growth functions. Seven cochlear implant listeners participated in two behavioral loudness growth tasks and an EEG recording session. The 40-Hz sinusoidally amplitude-modulated pulse trains were presented to CI channels stimulating a more apical and a more basal region of the cochlea, at different current levels encompassing the listeners' dynamic ranges. Behaviorally, loudness growth was measured using Absolute Magnitude Estimation and a Graphical Rating Scale with loudness categories. A good correspondence was found between the response amplitude functions and the behavioral loudness growth functions. The results are encouraging for future advances in individual, more automatic, and objective fitting of cochlear implants.
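
A minimal sketch of the kind of comparison described, assuming made-up data: both growth functions are normalized over the dynamic range and then correlated. This is illustrative only and not the authors' analysis.

```python
import numpy as np

def normalize(y):
    """Scale a growth function to the range 0..1 over the measured levels."""
    y = np.asarray(y, float)
    return (y - y.min()) / (y.max() - y.min())

levels = np.linspace(0, 100, 6)                              # percent of dynamic range
assr_amp = np.array([0.02, 0.05, 0.10, 0.18, 0.30, 0.45])    # uV, hypothetical
loudness = np.array([0.5, 2.0, 6.0, 15.0, 30.0, 48.0])       # magnitude estimates, hypothetical

r = np.corrcoef(normalize(assr_amp), normalize(loudness))[0, 1]
print(f"Correlation between normalized growth functions: r = {r:.2f}")
```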



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iL0eed
via IFTTT

Self-reported Hearing Difficulty, Tinnitus, and Normal Audiometric Thresholds: The National Health and Nutrition Examination Survey 1999-2002


Publication date: Available online 7 December 2017
Source:Hearing Research
Author(s): Christopher Spankovich, Victoria B. Gonzalez, Dan Su, Charles E. Bishop
Perceived hearing difficulty (HD) and/or tinnitus in the presence of normal audiometric thresholds present a clinical challenge, yet there are limited data regarding the prevalence and determinant factors contributing to HD. Here we present estimates generalized to the non-institutionalized population of the United States based on the cross-sectional, population-based National Health and Nutrition Examination Survey (NHANES) in 2,176 participants (20-69 years of age). Normal audiometric thresholds were defined by a pure-tone average (PTA4) of 0.5, 1.0, 2.0, and 4.0 kHz ≤ 25 dB HL in each ear. Hearing difficulty and tinnitus perception were self-reported. Of the 2,176 participants with complete data, 2,015 had normal audiometric thresholds based on PTA4; the prevalence of individuals with normal PTA4 who self-reported HD was 15%. The percentage of individuals with normal audiometric thresholds and persistent tinnitus was 10.6%. Multivariate logistic regression adjusting for age, sex, and hearing thresholds identified the following variables related to increased odds of HD: tinnitus, balance issues, noise exposure, arthritis, vision difficulties, neuropathic symptoms, and physical/mental/emotional issues; and the following related to increased odds of reported persistent tinnitus: HD, diabetes, arthritis, vision difficulties, confusion/memory issues, balance issues, noise exposure, high alcohol consumption, neuropathic symptoms, and analgesic use. Analyses using an alternative definition of normal hearing, pure-tone thresholds ≤ 25 dB HL at 0.5, 1.0, 2.0, 4.0, 6.0, and 8.0 kHz in each ear, revealed a lower prevalence of HD and tinnitus but comparable multivariate relationships. The findings suggest that the prevalence of HD depends on how normal hearing is defined, and that the factors affecting the odds of reported HD include tinnitus, noise exposure, mental/cognitive status, and other sensory deficits.
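
As a sketch of the threshold definition and the covariate-adjusted modelling described above, assuming hypothetical column names rather than actual NHANES variable names:

```python
import numpy as np
import statsmodels.api as sm

def pta4(thresholds_db_hl):
    """Pure-tone average across 0.5, 1, 2 and 4 kHz (dB HL)."""
    return float(np.mean(thresholds_db_hl))

def has_normal_hearing(pta4_left, pta4_right, cutoff_db_hl=25.0):
    """Normal audiometric thresholds as defined in the abstract: PTA4 <= 25 dB HL in each ear."""
    return pta4_left <= cutoff_db_hl and pta4_right <= cutoff_db_hl

def fit_hd_model(df):
    """Covariate-adjusted logistic regression for self-reported hearing
    difficulty. `df` is a pandas DataFrame; the column names below are
    hypothetical, not actual NHANES variable names."""
    predictors = ["age", "sex", "pta4_best_ear", "tinnitus", "noise_exposure"]
    X = sm.add_constant(df[predictors])
    result = sm.Logit(df["hearing_difficulty"], X).fit(disp=False)
    return np.exp(result.params)   # odds ratios for each predictor

print(pta4([10, 15, 20, 25]), has_normal_hearing(17.5, 22.5))
```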



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2kO9By3
via IFTTT

A framework for testing and comparing binaural models


Publication date: Available online 28 November 2017
Source:Hearing Research
Author(s): Mathias Dietz, Jean-Hugues Lestang, Piotr Majdak, Richard M. Stern, Torsten Marquardt, Stephan D. Ewert, William M. Hartmann, Dan F.M. Goodman
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different modelling approaches that are often not equivalent but in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These are best resolved by systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but several unresolved questions remain for which competing model approaches exist. This article discusses a number of currently unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with experimental data. We introduce an auditory model framework which we believe can become a useful infrastructure for resolving some of the current controversies: it operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages, called artificial observers, that provide the same output format as a test subject.
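
The three-component architecture described above can be illustrated with minimal interfaces; the sketch below is an assumption-laden paraphrase in Python, not the framework's actual API.

```python
from abc import ABC, abstractmethod
import numpy as np

class AuditoryModel(ABC):
    @abstractmethod
    def process(self, stimulus: np.ndarray, fs: int) -> np.ndarray:
        """Transform a sound waveform into an internal representation."""

class ArtificialObserver(ABC):
    @abstractmethod
    def decide(self, representations: list[np.ndarray]) -> int:
        """Return a response in the same format a test subject would give,
        e.g. the index of the chosen interval in a forced-choice trial."""

def run_trial(experiment_stimuli, model: AuditoryModel,
              observer: ArtificialObserver, fs: int) -> int:
    """The experiment software calls the model and the observer exactly as it
    would present stimuli to, and collect a response from, a human listener."""
    representations = [model.process(s, fs) for s in experiment_stimuli]
    return observer.decide(representations)
```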



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2iOFbHI
via IFTTT

Sustained frontal midline theta enhancements during effortful listening track working memory demands


Publication date: Available online 27 November 2017
Source:Hearing Research
Author(s): Matthew G. Wisniewski, Nandini Iyer, Eric R. Thompson, Brian D. Simpson
Recent studies demonstrate that frontal midline theta power (4–8 Hz) enhancements in the electroencephalogram (EEG) relate to effortful listening. It has been proposed that these enhancements reflect working memory demands. Here, the need to retain auditory information in working memory was manipulated in a 2-interval, 2-alternative forced-choice delayed pitch discrimination task (“Which interval contained the higher pitch?”). On each trial, two square-wave stimuli differing in pitch at an individual's ∼70.7% correct threshold were separated by a 3-second ISI. In a ‘Roving’ condition, the lowest pitch stimulus was randomly selected on each trial (uniform distribution from 840 to 1160 Hz). In a ‘Fixed’ condition, the lowest pitch was always 979 Hz. Critically, the ‘Fixed’ condition allowed one to know the correct response immediately after the first stimulus (e.g., if the first stimulus is 979 Hz, the second must be higher). In contrast, the ‘Roving’ condition required retention of the first tone for comparison with the second. Frontal midline theta enhancements during the ISI were observed only for the ‘Roving’ condition. Alpha (8–13 Hz) enhancements were apparent during the ISI but did not differ significantly between conditions. Since conditions were matched for accuracy at threshold, the results suggest that frontal midline theta enhancements will not always accompany difficult listening. Mixed results in the literature regarding frontal midline theta enhancements may be related to differences between tasks with regard to working memory demands. Alpha enhancements may reflect a task-general set of effortful listening processes.
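
For orientation, the sketch below estimates theta (4–8 Hz) and alpha (8–13 Hz) band power from a single simulated EEG epoch using Welch's method. Channel selection, epoching, and statistics in the actual study are more involved; the simulated signal is purely illustrative.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_lo, f_hi):
    """Average power spectral density within [f_lo, f_hi] Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.mean(psd[band]))

fs = 256
t = np.arange(0, 3, 1 / fs)                        # a 3-second retention-interval epoch
epoch = np.sin(2 * np.pi * 6 * t) + 0.2 * np.random.randn(t.size)   # 6-Hz "theta" plus noise
theta = band_power(epoch, fs, 4, 8)
alpha = band_power(epoch, fs, 8, 13)
print(f"theta/alpha power ratio: {theta / alpha:.1f}")
```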



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2kO4ZYA
via IFTTT

Sweep-tone evoked stimulus frequency otoacoustic emissions in humans: Development of a noise-rejection algorithm and normative features


Publication date: Available online 20 November 2017
Source:Hearing Research
Author(s): Srikanta K. Mishra, Carrick L. Talmadge
In recent years there has been growing interest in measuring stimulus frequency otoacoustic emissions (SFOAEs) using sweep tones. While the sweep-tone technique has several advantages, one of its major problems is the lack of an objective analysis procedure that considers and rejects individual noisy recordings or noisy segments. A new, efficient, data-driven method for rejecting noisy segments in SFOAE analysis is proposed, and the normative features of SFOAEs are characterized in fifty normal-hearing young adults. The automated procedure involved phase detrending with a low-order polynomial and the application of median and interquartile ranges for outlier rejection from individual recordings. The SFOAE level and phase were analyzed using the least-squares fit method, and the noise floor was estimated from the error of the mean of the sweep level. Overall, the results of this study demonstrate the effectiveness of the automated noise rejection procedure and describe the normative features of sweep-tone evoked SFOAEs in human adults.
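
The median/interquartile-range screening described above might look roughly like the following sketch, applied to one scalar noise metric per analysis segment. The 1.5 × IQR fence and the per-segment metric are assumptions rather than the paper's exact criterion.

```python
import numpy as np

def reject_noisy_segments(segment_metric, k=1.5):
    """Return a boolean mask of segments kept after median/IQR screening.
    k = 1.5 is an assumed fence factor, not necessarily the published value."""
    x = np.asarray(segment_metric, float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    return (x >= med - k * iqr) & (x <= med + k * iqr)

# Example: segments 2 and 7 are far noisier than the rest and get dropped.
metric = np.array([1.0, 1.1, 9.5, 0.9, 1.2, 1.0, 1.1, 8.7, 1.0, 1.05])
print(reject_noisy_segments(metric))
```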



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2kP6XZ0
via IFTTT
