Wednesday, August 31, 2016

Modelling the effect of round window stiffness on residual hearing after cochlear implantation

Publication date: Available online 30 August 2016
Source: Hearing Research
Author(s): Stephen J. Elliott, Guangjian Ni, Carl A. Verschuur
Preservation of residual hearing after cochlear implantation is now considered an important goal of surgery. However, studies indicate an average post-operative hearing loss of around 20 dB at low frequencies. One factor which may contribute to post-operative hearing loss, but which has received little attention in the literature to date, is the increased stiffness of the round window, due to the physical presence of the cochlear implant, and to its subsequent thickening or to bone growth around it. A finite element model was used to estimate that there is approximately a 100-fold increase in the round window stiffness due to a cochlear implant passing through it. A lumped element model was then developed to study the effects of this change in stiffness on the acoustic response of the cochlea. As the round window stiffness increases, the effects of the cochlear and vestibular aqueducts become more important. An increase of round window stiffness by a factor of 10 is predicted to have little effect on residual hearing, but increasing this stiffness by a factor of 100 reduces the acoustic sensitivity of the cochlea by about 20 dB, below 1 kHz, in reasonable agreement with the observed loss in residual hearing after implantation. It is also shown that the effect of this stiffening could be reduced by incorporating a small gas bubble within the cochlear implant.
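
To make the stiffness argument concrete, here is a minimal Python sketch of the kind of lumped-element reasoning the abstract describes: the round window is treated as a compliance in series with a fixed impedance standing in for the rest of the cochlea, and the low-frequency sensitivity change is computed for 10-fold and 100-fold stiffness increases. The element values are illustrative placeholders, not the authors' fitted parameters.

```python
# Minimal sketch (not the authors' finite element or lumped parameter model):
# the round window (RW) is a compliance in series with a fixed impedance that
# stands in for the rest of the cochlea; we ask how scaling the RW stiffness
# changes low-frequency sensitivity. Element values are illustrative only.
import numpy as np

def sensitivity_change_db(freq_hz, k_rw, k_scale, z_rest):
    """dB change in cochlear input when RW stiffness is scaled by k_scale."""
    w = 2 * np.pi * freq_hz
    z_base = k_rw / (1j * w) + z_rest              # compliance-dominated RW
    z_stiff = (k_scale * k_rw) / (1j * w) + z_rest
    # response to a unit pressure source is 1/Z; compare magnitudes in dB
    return 20 * np.log10(np.abs(z_base) / np.abs(z_stiff))

freqs = np.array([125.0, 250.0, 500.0, 1000.0])
for scale in (10, 100):
    change = sensitivity_change_db(freqs, k_rw=1e11, k_scale=scale, z_rest=1e9)
    print(f"x{scale} stiffness:", change.round(1), "dB at", freqs, "Hz")
```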



from #Audiology via ola Kala on Inoreader http://ift.tt/2bBjABz
via IFTTT

Tuesday, August 30, 2016

New MA Recruits Join the School – See the Photos!

Last Thursday (8/25/2016), SLHS welcomed the new Master’s students at the annual pizza party! Our new recruits for the MA education program in speech-language pathology are excited to join the school!  They were able to mingle with the second year MA students, doctoral students, and faculty, and learn more about life in SLHS.

Welcome to the new recruits!




from #Audiology via ola Kala on Inoreader http://ift.tt/2bzIZMU
via IFTTT

Dyslexia Limits the Ability to Categorize Talker Dialect

Purpose
The purpose of this study was to determine whether the underlying phonological impairment in dyslexia is associated with a deficit in categorizing regional dialects.
Method
Twenty adults with dyslexia, 20 school-age children with dyslexia, and 40 corresponding control listeners with average reading ability listened to sentences produced by multiple talkers (both sexes) representing two dialects: Midland dialect in Ohio (same as listeners' dialect) and Southern dialect in Western North Carolina. Participants' responses were analyzed using signal detection theory.
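
The Method's signal detection analysis boils down to two standard quantities; the sketch below computes sensitivity (d′) and response bias (criterion c) for a two-dialect categorization task, with a log-linear correction for extreme proportions. The counts are invented for illustration, not the study's data.

```python
# Hedged sketch of the signal detection analysis named in the Method: compute
# sensitivity (d') and response bias (criterion c) for a two-dialect
# categorization task. Counts below are invented for illustration.
from scipy.stats import norm

def dprime_criterion(hits, misses, fas, crs):
    # log-linear correction keeps hit/false-alarm rates away from 0 and 1
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    d = norm.ppf(h) - norm.ppf(f)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d, c

d, c = dprime_criterion(hits=72, misses=28, fas=35, crs=65)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")  # positive c = conservative bias
```
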
Results
Listeners with dyslexia were less sensitive to talker dialect than listeners with average reading ability. Children were less sensitive to dialect than adults. Under stimulus uncertainty, listeners with average reading ability were biased toward Ohio dialect, whereas listeners with dyslexia were unbiased in their responses. Talker sex interacted with sensitivity and bias differently for listeners with dyslexia than for listeners with average reading ability. The correlations between dialect sensitivity and phonological memory scores were strongest for adults with dyslexia.
Conclusions
The results imply that the phonological deficit in dyslexia arises from impaired access to intact phonological representations rather than from poorly specified representations. It can be presumed that the impeded access to implicit long-term memory representations for indexical (dialect) information is due to less efficient operations in working memory, including deficiencies in utilizing talker normalization processes.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bA2Fu5
via IFTTT

Screening for Language Delay: Growth Trajectories of Language Ability in Low- and High-Performing Children

Purpose
This study investigated the stability and growth of preschool language skills and explored latent class analysis as an approach for identifying children at risk of language impairment.
Method
The authors present data from a large-scale 2-year longitudinal study, in which 600 children were assessed with a language-screening tool (LANGUAGE4) at age 4 years. A subsample (n = 206) was assessed on measures of sentence repetition, vocabulary, and grammatical knowledge at ages 4, 5, and 6 years.
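
Latent class analysis would normally be run in dedicated software; as a rough stand-in, the sketch below fits a four-component Gaussian mixture (one low-performing class plus three higher-performing classes, matching the Results) over three simulated language measures and reads off class sizes and profiles.

```python
# Rough stand-in for the latent class analysis described above: a Gaussian
# mixture over three language measures, with one low-performing class and
# three higher-performing classes. Scores are simulated, not cohort data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# columns: sentence repetition, vocabulary, grammatical knowledge (z scores)
low = rng.normal(-1.2, 0.6, size=(40, 3))        # low-performing children
typical = rng.normal(0.2, 0.8, size=(166, 3))    # remainder of the subsample
scores = np.vstack([low, typical])

gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
labels = gmm.predict(scores)
for k in range(4):
    print(f"class {k}: n = {(labels == k).sum()}, "
          f"mean profile = {scores[labels == k].mean(axis=0).round(2)}")
```
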
Results
A global latent language factor showed a high degree of longitudinal stability in children between the ages of 4 and 6 years. A low-performing group, showing a language deficit compared with their age peers at age 4, was identified on the basis of the LANGUAGE4. The growth rates during this 2-year period were parallel for the low-performing group and the 3 higher-performing groups of children.
Conclusions
There is strong stability in children's language skills between the ages of 4 and 6 years. The results demonstrate that a simple language screening measure can successfully identify a low-performing group of children who show persistent language weaknesses between the ages of 4 and 6 years.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bzOjPf
via IFTTT

Different cognitive functions discriminate gait performance in younger and older women: A pilot study

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Joaquin U. Gonzales, C. Roger James, Hyung Suk Yang, Daniel Jensen, Lee Atkins, Brennan J. Thompson, Kareem Al-Khalil, Michael O’Boyle
Aim
Cognitive dysfunction is associated with slower gait speed in older women, but whether cognitive function affects gait performance earlier in life has yet to be investigated. Thus, the objective of this study was to test the hypothesis that cognitive function will discriminate gait performance in healthy younger women.
Methods
Fast-pace and dual-task gait speed were measured in 30 young to middle-aged (30–45 y) and 26 older (61–80 y) women without mild cognitive impairment. Visuoperceptual ability, working memory, executive function, and learning ability were assessed using neuropsychological tests. Within each age group, women were divided by the median into lower and higher cognitive function groups to compare gait performance.
Results
Younger women with higher visuoperceptual ability had faster fast-pace (2.25±0.30 vs. 1.98±0.18 m/s, p≤0.01) and dual-task gait speed (2.02±0.27 vs. 1.69±0.25 m/s, p≤0.01) than women with lower visuoperceptual ability. The difference in dual-task gait speed remained significant (p=0.02) after adjusting for age, years of education, and other covariates. Dividing younger women based on other cognitive domains showed no difference in gait performance. In contrast, working memory and executive function discriminated dual-task gait speed (p<0.05) in older women after adjusting for age and education.
Conclusion
To our knowledge, this is the first study to show that poorer cognitive function even at a relatively young age can negatively impact mobility. Different cognitive functions discriminated gait performance based on age, highlighting a possible influence of aging in the relationship between cognitive function and mobility in women.
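
The median-split comparison described in the Methods is simple to reproduce in outline; the sketch below divides simulated participants at the median of one cognitive score and compares gait speed between the halves with a t-test. All numbers are invented, not the study's data.

```python
# Minimal sketch of the median-split comparison above: divide women at the
# median of one neuropsychological score and compare fast-pace gait speed
# between halves. All numbers are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
visuoperceptual = rng.normal(50, 10, size=30)                  # test score
gait_speed = 1.0 + 0.02 * visuoperceptual + rng.normal(0, 0.15, size=30)

median = np.median(visuoperceptual)
higher = gait_speed[visuoperceptual > median]
lower = gait_speed[visuoperceptual <= median]
t, p = stats.ttest_ind(higher, lower)
print(f"higher {higher.mean():.2f} m/s vs lower {lower.mean():.2f} m/s, p = {p:.3f}")
```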



from #Audiology via ola Kala on Inoreader http://ift.tt/2c8jq1r
via IFTTT

Recovery of endocochlear potential after severe damage to lateral wall fibrocytes following acute cochlear energy failure.


Neuroreport. 2016 Aug 26;

Authors: Kitao K, Mizutari K, Nakagawa S, Matsunaga T, Fukuda S, Fujii M

Abstract
Reduction of endocochlear potential (EP) is one of the main causes of sensorineural hearing loss. In this study, we investigated changes in the EP using a mouse model of acute cochlear energy failure, which comprised severe cochlear lateral wall damage induced by the local administration of 3-nitropropionic acid to the inner ear. We also analyzed the correlation between EP changes and histological findings in the cochlear lateral wall. We detected the recovery of the EP and hearing function at lower frequencies after severe damage of the cochlear lateral wall fibrocytes at the corresponding region. Remodeling of the cochlear lateral wall was associated with EP recovery, including the re-expression of ion transporters or gap junctions (i.e. Na/K/ATPase-β1 and connexin 26). These results indicate a mechanism for late-phase hearing recovery after severe deafness, which is frequently observed in clinical settings.

PMID: 27571432 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2cbjEaY
via IFTTT

Monday, August 29, 2016

Ambulatory activity classification with dendogram-based support vector machine: Application in lower-limb active exoskeleton

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Oishee Mazumder, Ananda Sankar Kundu, Prasanna Kumar Lenka, Subhasis Bhaumik
Ambulatory activity classification is an active area of research for controlling and monitoring state initiation, termination, and transition in mobility assistive devices such as lower-limb exoskeletons. State transitions of lower-limb exoskeletons reported thus far have been achieved mostly through the use of manual switches or state machine-based logic. In this paper, we propose a postural activity classifier using a ‘dendogram-based support vector machine’ (DSVM) which can be used to control a lower-limb exoskeleton.

A pressure sensor-based wearable insole and two six-axis inertial measurement units (IMU) have been used for recognising two static postural activities (sit, stand) and seven dynamic ones: sit-to-stand, stand-to-sit, level walk, fast walk, slope walk, stair ascent, and stair descent. Most of the ambulatory activities are periodic in nature and have unique patterns of response. The proposed classification algorithm involves the recognition of activity patterns on the basis of the periodic shape of trajectories. Polynomial coefficients extracted from the hip angle trajectory and the centre-of-pressure (CoP) trajectory during an activity cycle are used as features to classify dynamic activities.

The novelty of this paper lies in finding suitable instrumentation, developing post-processing techniques, and selecting shape-based features for ambulatory activity classification. The proposed activity classifier is used to identify the activity states of a lower-limb exoskeleton. The DSVM classifier algorithm achieved an overall classification accuracy of 95.2%.
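
The shape-based feature idea the abstract describes can be sketched briefly: fit a low-order polynomial to each activity cycle's hip-angle trajectory and feed the coefficients to an SVM. A flat multi-class SVC stands in for the paper's dendogram-arranged hierarchy of binary SVMs, and the cycles are synthetic.

```python
# Sketch of the shape-based feature idea above, under stated assumptions:
# fit a low-order polynomial to each activity cycle's hip-angle trajectory
# and use the coefficients as SVM features. A flat multi-class SVC stands in
# for the paper's dendogram (hierarchical) arrangement of binary SVMs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def poly_features(cycle, order=5):
    """Polynomial coefficients of one time-normalized activity cycle."""
    t = np.linspace(0.0, 1.0, len(cycle))
    return np.polyfit(t, cycle, order)

# Synthetic hip-angle cycles for two activities (level walk vs stair ascent).
t = np.linspace(0, 1, 101)
walk = [30 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size) for _ in range(20)]
stair = [45 * np.sin(2 * np.pi * t + 0.6) + rng.normal(0, 2, t.size) for _ in range(20)]

X = np.array([poly_features(c) for c in walk + stair])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```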



from #Audiology via ola Kala on Inoreader http://ift.tt/2bLXsTz
via IFTTT

Quantifying intra-limb coordination in walking of healthy children aged three to six

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Mingyu Hu, Nan Zhou, Bo Xu, Wuyong Chen, Jianxin Wu, Jin Zhou
The aim of this study was first to quantify intra-limb coordination and then to explore gender differences in 180 healthy children aged 3–6 years. The children's joint Euler angles and angular velocities were measured and used to calculate the phase angle (PA) and continuous relative phase (CRP). First, a portrait of the mean and standard deviation (SD) of PA and CRP was applied to quantify coordination in the knees and ankles; then five key events in walking were selected, and their inter-age differences were assessed by one-way ANOVA. Finally, gender differences were evaluated by GLM-univariate analysis. The significance level was 0.05, and the confidence interval was 95%. Our results show that similar portraits of PA and CRP were found for knees and ankles from ages 3–6; the SDs demonstrated that PA and CRP in the knees and ankles were consistent with increasing age. Moreover, θ_CRP(K-A) demonstrated that the direction reversal at heel-off for those two joints occurred earlier in children aged 5 and 6 than in those aged 3 and 4, and no significant inter-age differences were recorded for PA and CRP in either gait event. Finally, gender differences existed before the age of 6, particularly in transition periods such as heel contact, toe-off, and mid-swing. Overall, although further development of gait control and balance is still under way, the basic principles of intra-limb coordination have formed by the age of 3, and gender differences already exist before the age of 6.
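
The PA and CRP computations named above follow a common phase-portrait convention; the sketch below assumes that convention (normalize angle and angular velocity, take PA = atan2(velocity, angle), and CRP as the knee-ankle PA difference) on synthetic knee and ankle signals.

```python
# A minimal sketch of the phase angle (PA) and continuous relative phase
# (CRP) computation, assuming the common phase-portrait definition:
# normalize angle and angular velocity, PA = atan2(velocity, angle), and
# CRP = knee PA minus ankle PA. Signals below are synthetic.
import numpy as np

def phase_angle(theta, omega):
    """Phase angle (degrees) from a normalized angle/angular-velocity portrait."""
    th = 2 * (theta - theta.min()) / (theta.max() - theta.min()) - 1
    om = omega / np.abs(omega).max()
    return np.degrees(np.arctan2(om, th))

t = np.linspace(0, 1, 101)                  # one time-normalized gait cycle
knee = 30 * np.sin(2 * np.pi * t)
ankle = 15 * np.sin(2 * np.pi * t - 0.4)    # ankle lags the knee
pa_knee = phase_angle(knee, np.gradient(knee, t))
pa_ankle = phase_angle(ankle, np.gradient(ankle, t))
crp = pa_knee - pa_ankle                    # theta_CRP(K-A)
print("mean |CRP| over the cycle:", np.abs(crp).mean().round(1), "deg")
```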



from #Audiology via ola Kala on Inoreader http://ift.tt/2c3Iyq0
via IFTTT

Low-frequency bias tone suppression of auditory-nerve responses to low-level clicks and tones

Publication date: November 2016
Source: Hearing Research, Volume 341
Author(s): Hui Nam, John J. Guinan
We used low-frequency “bias” tones (BTs) to explore whether click and tone responses are affected in the same way by cochlear active processes. In nonlinear systems the responses to clicks are not always simply related to the responses to tones. Cochlear amplifier gain depends on the incremental slope of the outer-hair-cell (OHC) stereocilia mechano-electric transduction (MET) function. BTs transiently change the operating point of OHC MET channels and can suppress cochlear-amplifier gain by pushing OHC METs into low-slope saturation regions. BT effects on single auditory-nerve (AN) fibers have been studied on tone responses but not on click responses. We recorded from AN fibers in anesthetized cats and compared tone and click responses using 50 Hz BTs at 70–120 dB SPL to manipulate OHC stereocilia position. BTs can also excite and thereby obscure the BT suppression. We measured AN-fiber response synchrony to BTs alone so that we could exclude suppression measurements when the BT synchrony might obscure the suppression. BT suppression of low-level tone and click responses followed the traditional pattern of twice-a-BT-cycle suppression with more suppression at one phase than the other. The major suppression phases of most fibers were tightly grouped, with little difference between click and tone suppressions, which is consistent with low-level click and tone responses being amplified in the same way. The data are also consistent with the operating point of the OHC MET function varying smoothly from symmetric in the base to offset in the apex, and, in contrast, with the IHC MET function being offset throughout the cochlea. As previously reported, bias tones presented alone excited AN fibers at one or more phases, a phenomenon termed “peak splitting”, with most BT excitation phases ∼¼ cycle before or after the major suppression phase. We explain peak splitting as being due to distortion in multiple fluid drives to inner-hair-cell stereocilia.
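
Response synchrony to a bias tone is conventionally quantified as vector strength; the sketch below computes it for simulated spike times around the 50 Hz BT period. This is the standard phase-locking index, not necessarily the exact measure the authors used.

```python
# Hedged sketch of the synchrony measure implied above: vector strength of
# spike times relative to the 50 Hz bias tone, a standard index of AN-fiber
# phase locking. Spike times here are simulated, not recorded data.
import numpy as np

def vector_strength(spike_times, freq_hz):
    """Vector strength r in [0, 1]; 1 = perfect phase locking."""
    phases = 2 * np.pi * freq_hz * spike_times   # phase of each spike (rad)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(3)
# spikes clustered near one phase of a 50 Hz cycle, with temporal jitter
cycles = rng.integers(0, 50, size=400)
spikes = cycles / 50.0 + 0.004 + rng.normal(0, 0.0015, size=400)
print("vector strength at 50 Hz:", round(vector_strength(spikes, 50.0), 2))
```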



from #Audiology via ola Kala on Inoreader http://ift.tt/2c2VBrR
via IFTTT

Sunday, August 28, 2016

Temporal and spatial gait parameters in children with Cri du Chat Syndrome under single and dual task conditions

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Laurel D. Abbruzzese, Rachel Salazar, Maddie Aubuchon, Ashwini K. Rao
Aim
To describe temporal and spatial gait characteristics in individuals with Cri du Chat syndrome (CdCS) and to explore the effects of performing concurrent manual tasks while walking.
Methods
The gait parameters of 14 participants with CdCS (mean age 10.3, range 3–20 years) and 14 age-matched controls (mean age 10.1, range 3–20 years) were collected using the GAITRite® instrumented walkway. All participants first walked without any concurrent tasks and then performed 2 motor dual-task walking conditions (pitcher and tray).
Results
Individuals with CdCS took more frequent, smaller steps than controls but, on average, had a comparable gait speed. In addition, there was a significant task-by-group interaction. Participants decreased gait speed, decreased cadence, decreased step length, and increased % time in double limb support under dual-task conditions compared to single-task conditions. However, the age-matched controls altered their gait for both manual tasks, whereas the participants with CdCS altered their gait only for the tray task.
Interpretation
Although individuals with CdCS ambulate with a gait speed comparable to age-matched controls under single-task conditions, they did not significantly alter their gait when carrying a pitcher with a cup of water inside, unlike controls. It is not clear whether individuals with CdCS had difficulty attending to task demands or difficulty modifying their gait.
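
A standard index for the task-by-group interaction reported here is the dual-task cost; the sketch below computes it from single- and dual-task gait speeds. The speeds are invented values, not the study's measurements.

```python
# Illustrative sketch of a standard index for the task-by-group interaction
# above: dual-task cost (DTC) of each manual condition relative to walking
# alone, DTC = 100 * (single - dual) / single. Speeds are invented values.
def dual_task_cost(single_speed, dual_speed):
    return 100.0 * (single_speed - dual_speed) / single_speed

single, tray, pitcher = 1.10, 0.95, 1.07   # m/s, one hypothetical participant
print(f"tray DTC = {dual_task_cost(single, tray):.1f}%")
print(f"pitcher DTC = {dual_task_cost(single, pitcher):.1f}%")
```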



from #Audiology via ola Kala on Inoreader http://ift.tt/2cjBM3y
via IFTTT

Reduced knee adduction moments for management of knee osteoarthritis

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Ryan T. Lewinson, Isabelle A. Vallerand, Kelsey H. Collins, J. Preston Wiley, Victor M.Y. Lun, Chirag Patel, Linda J. Woodhouse, Raylene A. Reimer, Jay T. Worobets, Walter Herzog, Darren J. Stefanyshyn
Wedged insoles are believed to be of clinical benefit to individuals with knee osteoarthritis by reducing the knee adduction moment (KAM) during gait. However, previous clinical trials have not specifically controlled for KAM reduction at baseline; thus, it is unknown whether reduced KAMs actually confer a clinical benefit. Forty-eight participants with medial knee osteoarthritis were randomly assigned to either a control group, where no footwear intervention was given, or a wedged insole group, where KAM reduction was confirmed at baseline. KAMs, Knee Injury and Osteoarthritis Outcome Score (KOOS), and Physical Activity Scale for the Elderly (PASE) scores were measured at baseline. KOOS and PASE surveys were re-administered at three months' follow-up. The wedged insole group did not experience a statistically significant or clinically meaningful change in KOOS pain over three months (p=0.173). Furthermore, there was no association between change in KAM magnitude and change in KOOS pain over three months within the wedged insole group (R2=0.02, p=0.595). Improvement in KOOS pain for the wedged insole group was associated with worse baseline pain and a change in PASE score over the three-month study (R2=0.57, p=0.007). As an exploratory comparison, there was no significant difference in change in KOOS pain (p=0.49) between the insole and control groups over three months. These results suggest that reduced KAMs do not appear to provide any clinical benefit compared to no intervention over a follow-up period of three months. ClinicalTrials.gov ID Number: NCT02067208
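
The R² and p values quoted above come from a simple linear regression of pain change on KAM change; the sketch below reproduces the shape of that analysis on simulated numbers.

```python
# Minimal sketch of the association test above: regress change in KOOS pain
# on change in KAM and report R^2 and p. Values are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
delta_kam = rng.normal(-0.05, 0.03, size=24)    # change in KAM (Nm/kg)
delta_koos = rng.normal(5.0, 8.0, size=24)      # change in KOOS pain (points)

res = stats.linregress(delta_kam, delta_koos)
print(f"R^2 = {res.rvalue ** 2:.2f}, p = {res.pvalue:.3f}")
```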



from #Audiology via ola Kala on Inoreader http://ift.tt/2buxkZA
via IFTTT

Decreased high-frequency center-of-pressure complexity in recently concussed asymptomatic athletes

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Peter C. Fino, Maury A. Nussbaum, Per Gunnar Brolinson
Two experiments compared multiple methods of estimating postural stability entropy to address three questions: 1) whether postural complexity differences exist between concussed and healthy athletes immediately following return-to-play; 2) which methods best detect such differences; and 3) what an appropriate interpretation of such differences is. First, center of pressure (COP) data were collected from six concussed athletes over the six weeks immediately following their concussion and from 24 healthy athletes. Second, 25 healthy non-athletes performed four quiet standing tasks: normal, co-contracting their lower extremity muscles, performing a cognitive arithmetic task, and voluntarily manipulating their sway. Postural complexity was calculated using approximate, sample, multivariate sample, and multivariate composite multiscale (MV-CompMSE) entropy methods for both high-pass filtered and low-pass filtered COP data. MV-CompMSE of the high-pass filtered COP signal identified the most consistent differences between groups, with concussed athletes exhibiting less complexity over the high-frequency COP time series. Among healthy non-athletes, high-pass filtered MV-CompMSE increased only in the co-contraction condition, suggesting the decrease in high-frequency MV-CompMSE found in concussed athletes may be due to more relaxed muscles or less complex muscle contractions. This decrease in entropy may be associated with reported increases in intra-cortical inhibition. Furthermore, a single-case study suggested high-frequency MV-CompMSE may be a useful clinical tool for concussion management.
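
The entropy measures compared above share two building blocks, sample entropy and coarse-graining; the sketch below implements plain versions of both. The paper's MV-CompMSE additionally averages across coarse-graining offsets and across the two COP axes, which is omitted here for brevity.

```python
# Sketch of two building blocks shared by the entropy methods above: plain
# sample entropy and the coarse-graining step of multiscale variants. The
# paper's MV-CompMSE also averages over coarse-graining offsets and over the
# two COP axes, which is omitted here for brevity.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) of a 1-D series, with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def matches(mm):
        tpl = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(tpl[:, None] - tpl[None, :]), axis=2)  # Chebyshev
        return np.sum(d <= r) - len(tpl)         # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(5)
cop = np.cumsum(rng.normal(0, 1, 1000)) * 0.01   # toy COP-like random walk
for s in (1, 2, 5):
    print(f"scale {s}: SampEn = {sample_entropy(coarse_grain(cop, s)):.3f}")
```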



from #Audiology via ola Kala on Inoreader http://ift.tt/2cjCaz9
via IFTTT

Toddlers actively reorganize their whole body coordination to maintain walking stability while carrying an object

Publication date: October 2016
Source: Gait & Posture, Volume 50
Author(s): Wen-Hao Hsu, Daniel L. Miranda, Trevor L. Chistolini, Eugene C. Goldfield
Balanced walking involves freely swinging the limbs like pendula. However, children immediately begin to carry objects as soon as they can walk. One possibility for this early skill development is that whole body coordination during walking may be re-organized into loosely coupled collections of body parts, allowing children to use their arms to perform one function, while the legs perform another. Therefore, this study examines: 1) how carrying an object affects the coordination of the arms and legs during walking, and 2) if carrying an object influences stride length and width. Ten healthy toddlers with 3–12 months of walking experience were recruited to walk barefoot while carrying or not carrying a small toy. Stride length, width, speed, and continuous relative phase (CRP) of the hips and of the shoulders were compared between carrying conditions. While both arms and legs demonstrated destabilization and stabilization throughout the gait cycle, the arms showed a reduction in intra-subject coordination variability in response to carrying an object. Carrying an object may modify the function of the arms from swinging for balance to maintaining hold of an object. The observed period-dependent changes of the inter-limb coordination of the hips and of the shoulders also support this interpretation. Overall, these findings support the view that whole-body coordination patterns may become partitioned in particular ways as a function of task requirements.



from #Audiology via ola Kala on Inoreader http://ift.tt/2buwTyp
via IFTTT

Stereocilia morphogenesis and maintenance through regulation of actin stability.


Semin Cell Dev Biol. 2016 Aug 23;

Authors: McGrath J, Roy P, Perrin BJ

Abstract
Stereocilia are actin-based protrusions on auditory and vestibular sensory cells that are required for hearing and balance. They convert physical force from sound, head movement or gravity into an electrical signal, a process that is called mechanoelectrical transduction. This function depends on the ability of sensory cells to grow stereocilia of defined lengths. These protrusions form a bundle with a highly precise geometry that is required to detect nanoscale movements encountered in the inner ear. Congenital or progressive stereocilia degeneration causes hearing loss. Thus, understanding stereocilia hair bundle structure, development, and maintenance is pivotal to understanding the pathogenesis of deafness. Stereocilia cores are made from a tightly packed array of parallel, crosslinked actin filaments, the length and stability of which are regulated in part by myosin motors, actin crosslinkers and capping proteins. This review aims to describe stereocilia actin regulation in the context of an emerging "tip turnover" model where actin assembles and disassembles at stereocilia tips while the remainder of the core is exceptionally stable.

PMID: 27565685 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bAbp5z
via IFTTT

Saturday, August 27, 2016

Lexical tone recognition in noise in normal-hearing children and prelingually deafened children with cochlear implants.


Int J Audiol. 2016 Aug 26;:1-8

Authors: Mao Y, Xu L

Abstract
OBJECTIVE: The purpose of the present study was to investigate Mandarin tone recognition in background noise in children with cochlear implants (CIs), and to examine the potential factors contributing to their performance.
DESIGN: Tone recognition was tested using a two-alternative forced-choice paradigm in various signal-to-noise ratio (SNR) conditions (i.e. quiet, +12, +6, 0, and -6 dB). Linear correlation analysis was performed to examine possible relationships between the tone-recognition performance of the CI children and the demographic factors.
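
Constructing the listed SNR conditions amounts to scaling the noise so the speech-to-noise power ratio hits a target value in dB; the sketch below shows that scaling on placeholder signals, not the study's stimuli.

```python
# Illustrative sketch of constructing the SNR conditions listed in the
# Design: scale the noise so the speech-to-noise power ratio hits a target
# SNR in dB before mixing. Signals here are placeholders, not test stimuli.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech + noise scaled to the requested SNR (dB)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + gain * noise

rng = np.random.default_rng(6)
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)      # toy "speech" tone
noise = rng.normal(0, 0.05, speech.size)
for snr in (12, 6, 0, -6):
    mixed = mix_at_snr(speech, noise, snr)
    achieved = 10 * np.log10(np.mean(speech**2) / np.mean((mixed - speech)**2))
    print(f"target {snr:+d} dB -> achieved {achieved:+.1f} dB")
```
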
STUDY SAMPLE: Sixty-six prelingually deafened children with CIs and 52 normal-hearing (NH) children as controls participated in the study.
RESULTS: Children with CIs showed an overall poorer tone-recognition performance and were more susceptible to noise than their NH peers. Tone confusions between Mandarin tone 2 and tone 3 were most prominent in both CI and NH children except for in the poorest SNR conditions. Age at implantation was significantly correlated with tone-recognition performance of the CI children in noise.
CONCLUSIONS: There is a marked deficit in tone recognition in prelingually deafened children with CIs, particularly in noise listening conditions. While factors that contribute to the large individual differences are still elusive, early implantation could be beneficial to tone development in pediatric CI users.

PMID: 27564095 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bHKTZr
via IFTTT

Corrigendum.


Int J Audiol. 2016 Aug 26;:1

Authors:

PMID: 27561903 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bzj6bV
via IFTTT

Semantic Processing in Deaf and Hard-of-Hearing Children: Large N400 Mismatch Effects in Brain Responses, Despite Poor Semantic Ability.


Front Psychol. 2016;7:1146

Authors: Kallioinen P, Olofsson J, Nakeva von Mentzer C, Lindgren M, Ors M, Sahlén BS, Lyxell B, Engström E, Uhlén I

Abstract
Difficulties in auditory and phonological processing affect semantic processing in speech comprehension for deaf and hard-of-hearing (DHH) children. However, little is known about brain responses related to semantic processing in this group. We investigated event-related potentials (ERPs) in DHH children with cochlear implants (CIs) and/or hearing aids (HAs), and in normally hearing controls (NH). We used a semantic priming task with spoken word primes followed by picture targets. In both DHH children and controls, cortical response differences between matching and mismatching targets revealed a typical N400 effect associated with semantic processing. Children with CI had the largest mismatch response despite poor semantic abilities overall; children with CI also had the largest ERP differentiation between mismatch types, with small effects in within-category mismatch trials (target from the same category as the prime) and large effects in between-category mismatch trials (target from a different category than the prime), compared with matching trials. Children with NH and HA had similar responses to both mismatch types. While the large and differentiated ERP responses in the CI group were unexpected and should be interpreted with caution, the results could reflect less precision in semantic processing among children with CI, or a stronger reliance on predictive processing.

PMID: 27559320 [PubMed]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bWMRmP
via IFTTT

JAAA CEU Program.


J Am Acad Audiol. 2016 Sep;27(8):684-685

Authors:

PMID: 27564447 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2ciDQsR
via IFTTT

Response to Dr. Vermiglio.


J Am Acad Audiol. 2016 Sep;27(8):683

Authors: Jerger J

PMID: 27564446 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bX2Dhx
via IFTTT

Validity and Reliability of the Hearing Handicap Inventory for Elderly: Version Adapted for Use on the Portuguese Population.


J Am Acad Audiol. 2016 Sep;27(8):677-682

Authors: de Paiva SM, Simões J, Paiva A, Newman C, Castro E Sousa F, Bébéar JP

Abstract
BACKGROUND: The use of the Hearing Handicap Inventory for the Elderly (HHIE) questionnaire enables us to measure self-perceived psychosocial handicaps of hearing impairment in the elderly as a supplement to pure-tone audiometry. This screening instrument is widely used and has been adapted and validated for many languages; all of these versions have retained the validity and reliability of the original version.
PURPOSE: To validate the HHIE questionnaire, translated into Portuguese of Portugal, on the Portuguese population.
RESEARCH DESIGN: This was a descriptive, correlational, qualitative study. The authors performed the translation from English into Portuguese, the linguistic adaptation, and the back-translation.
STUDY SAMPLE: Two hundred and sixty patients from the Ear, Nose, and Throat (ENT) Department of Coimbra University Hospitals were divided into a case group (83 individuals) and a control group (177 individuals).
INTERVENTION: All of the 260 patients completed the 25 items in the questionnaire and the answers were reviewed for completeness.
DATA COLLECTION AND ANALYSIS: The patients volunteered to answer the 25-item HHIE during an ENT appointment. Correlations between each individual item and the total score of the HHIE were tested, and demographic and clinical variables were correlated with the total score, as well. The instrument's reproducibility was assessed using the internal consistency model (Cronbach's alpha).
RESULTS: The questions were successfully understood by the participants. There was a significant difference in the HHIE-10 and HHIE-25 total scores between the two groups (p < 0.001). Positive correlations were seen between the global question and the HHIE-10 and HHIE-25. In the regression study, a relationship was observed between the pure-tone average and the HHIE-10 (p < 0.001). Reliability of the instrument was demonstrated by a Cronbach's alpha of 0.79.
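
Cronbach's alpha, the internal-consistency index used above, has a closed form over item scores; the sketch below computes it on simulated 25-item responses tuned to land near the reported 0.79.

```python
# Minimal sketch of the reliability index reported above: Cronbach's alpha,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# Responses are simulated to land near the reported value of 0.79.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
trait = rng.normal(0, 1, size=(260, 1))                  # shared handicap trait
responses = trait + rng.normal(0, 2.5, size=(260, 25))   # 25 HHIE-style items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```
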
CONCLUSIONS: The HHIE translation into Portuguese of Portugal maintained the validity of the original version and it is useful to assess the psychosocial handicap of hearing impairment in the elderly.

PMID: 27564445 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2ciD7I5
via IFTTT

Motivational Interviewing as an Adjunct to Hearing Rehabilitation for Patients with Tinnitus: A Randomized Controlled Pilot Trial.


J Am Acad Audiol. 2016 Sep;27(8):669-676

Authors: Zarenoe R, Söderlund LL, Andersson G, Ledin T

Abstract
PURPOSE: To test the effects of a brief motivational interviewing (MI) program as an adjunct to hearing aid rehabilitation for patients with tinnitus and sensorineural hearing loss.
RESEARCH DESIGN: This was a pilot randomized controlled trial.
STUDY SAMPLE: The sample consisted of 50 patients aged between 40 and 82 yr with both tinnitus and sensorineural hearing loss and a pure-tone average (0.5, 1, 2, and 4 kHz) < 70 dB HL. All patients were first-time hearing aid users.
INTERVENTION: A brief MI program was used during hearing aid fitting in 25 patients, whereas the remainder received standard practice (SP), with conventional hearing rehabilitation.
DATA COLLECTION AND ANALYSIS: A total of 46 patients (N = 23 + 23) with tinnitus were included for further analysis. The Tinnitus Handicap Inventory (THI) and the International Outcome Inventory for Hearing Aids (IOI-HA) were administered before and after rehabilitation. THI was used to investigate changes in tinnitus annoyance, and the IOI-HA was used to determine the effect of hearing aid treatment.
RESULTS: Self-reported tinnitus disability (THI) decreased significantly in the MI group (p < 0.001) and in the SP group (p < 0.006). However, there was greater improvement in the MI group (p < 0.013). Furthermore, the findings showed a significant improvement in patients' satisfaction concerning the hearing aids (IOI-HA, within both groups; MI group, p < 0.038; and SP group, p < 0.026), with no difference between the groups (p < 0.99).
CONCLUSION: Tinnitus handicap scores decrease to a greater extent following brief MI than following SP. Future research on the value of incorporating MI into audiological rehabilitation using randomized controlled designs is required.

PMID: 27564444 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2ciCvlJ
via IFTTT

Manganese and Lipoflavonoid Plus(®) to Treat Tinnitus: A Randomized Controlled Trial.


J Am Acad Audiol. 2016 Sep;27(8):661-668

Authors: Rojas-Roncancio E, Tyler R, Jun HJ, Wang TC, Ji H, Coelho C, Witt S, Hansen MR, Gantz BJ

Abstract
BACKGROUND: Several tinnitus sufferers suggest that manganese has been helpful with their tinnitus.
PURPOSE: We tested this in a controlled experiment where participants were committed to taking manganese and Lipoflavonoid Plus(®) to treat their tinnitus.
RESEARCH DESIGN: Randomized controlled trial.
STUDY SAMPLE: 40 participants were randomized to receive both manganese and Lipoflavonoid Plus(®) for 6 months, or Lipoflavonoid Plus(®) only (as the control).
DATA COLLECTION AND ANALYSIS: Pre- and postmeasures were obtained with the Tinnitus Handicap Questionnaire, Tinnitus Primary Functions Questionnaire, and tinnitus loudness and annoyance ratings. An audiologist performed the audiogram, the tinnitus loudness match, and minimal masking level.
RESULTS: Twelve participants dropped out of the study because of side effects or were lost to follow-up. In the manganese group, 1 participant (out of 12) showed a decrease in the questionnaire scores, and another showed a decrease in the loudness and annoyance ratings. No participants from the control group (total 16) showed a decrease in the questionnaire ratings. Two participants in the control group reported a loudness decrement and one reported an annoyance decrement.
CONCLUSIONS: We were not able to conclude that either manganese or Lipoflavonoid Plus(®) is an effective treatment for tinnitus.

PMID: 27564443 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2ciATs9
via IFTTT

A Sequential Sentence Paradigm Using Revised PRESTO Sentence Lists.


J Am Acad Audiol. 2016 Sep;27(8):647-660

Authors: Plotkowski AR, Alexander JM

Abstract
BACKGROUND: Listening in challenging situations requires explicit cognitive resources to decode and process speech. Traditional speech recognition tests are limited in documenting this cognitive effort, which may differ greatly between individuals or listening conditions despite similar scores. A sequential sentence paradigm was designed to be more sensitive to individual differences in demands on verbal processing during speech recognition.
PURPOSE: The purpose of this study was to establish the feasibility, validity, and equivalency of test materials in the sequential sentence paradigm as well as to evaluate the effects of masker type, signal-to-noise ratio (SNR), and working memory (WM) capacity on performance in the task.
RESEARCH DESIGN: Listeners heard a pair of sentences and repeated aloud the second sentence (immediate recall) and then wrote down the first sentence (delayed recall). Sentence lists were from the Perceptually Robust English Sentence Test Open-set (PRESTO) test. In experiment I, listeners completed a traditional speech recognition task. In experiment II, listeners completed the sequential sentence task at one SNR. In experiment III, the masker type (steady noise versus multitalker babble) and SNR were varied to demonstrate the effects of WM as the speech material increased in difficulty.
STUDY SAMPLE: Young, normal-hearing adults (total n = 53) from the Purdue University community completed one of the three experiments.
DATA COLLECTION AND ANALYSIS: Keyword scoring of the PRESTO lists was completed for both the immediate- and delayed-recall sentences. The Verbal Letter Monitoring task, a test of WM, was used to separate listeners into a low-WM or high-WM group.
RESULTS: Experiment I indicated that mean recognition on the single-sentence task was highly variable between the original PRESTO lists. Modest rearrangement of the sentences yielded 18 statistically equivalent lists (mean recognition = 65.0%, range = 64.4-65.7%), which were used in the sequential sentence task in experiment II. In the new test paradigm, recognition of the immediate-recall sentences was not statistically different from the single-sentence task, indicating that there were no cognitive load effects from the delayed-recall sentences. Finally, experiment III indicated that multitalker babble and steady-state noise were equally detrimental to immediate recall of sentences for both low- and high-WM groups. On the other hand, delayed recall of sentences in multitalker babble was disproportionately more difficult for the low-WM group compared with the high-WM group.
CONCLUSIONS: The sequential sentence paradigm is a feasible test format with mostly equivalent lists. Future studies using this paradigm may need to consider individual differences in WM to see the full range of effects across different conditions. Possible applications include testing the efficacy of various signal-processing techniques in clinical populations.

PMID: 27564442 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bOHo2B
via IFTTT

Directional Processing and Noise Reduction in Hearing Aids: Individual and Situational Influences on Preferred Setting.


J Am Acad Audiol. 2016 Sep;27(8):628-646

Authors: Neher T, Wagener KC, Fischer RL

Abstract
BACKGROUND: A better understanding of individual differences in hearing aid (HA) outcome is a prerequisite for more personalized HA fittings. Currently, knowledge of how different user factors relate to response to directional processing (DIR) and noise reduction (NR) is sparse.
PURPOSE: To extend a recent study linking preference for DIR and NR to pure-tone average hearing thresholds (PTA) and cognitive factors by investigating if (1) equivalent links exist for different types of DIR and NR, (2) self-reported noise sensitivity and personality can account for additional variability in preferred DIR and NR settings, and (3) spatial target speech configuration interacts with individual DIR preference.
RESEARCH DESIGN: Using a correlational study design, overall preference for different combinations of DIR and NR programmed into a commercial HA was assessed in a complex speech-in-noise situation and related to PTA, cognitive function, and different personality traits.
STUDY SAMPLE: Sixty experienced HA users aged 60-82 yr with controlled variation in PTA and working memory capacity took part in this study. All of them had participated in the earlier study, as part of which they were tested on a measure of "executive control" tapping into cognitive functions such as working memory, mental flexibility, and selective attention.
DATA COLLECTION AND ANALYSIS: Six HA settings based on unilateral (within-device) or bilateral (across-device) DIR combined with inactive, moderate, or strong single-microphone NR were programmed into a pair of behind-the-ear HAs together with individually prescribed amplification. Overall preference was assessed using a free-field simulation of a busy cafeteria situation with either a single frontal talker or two talkers at ±30° azimuth as the target speech. In addition, two questionnaires targeting noise sensitivity and the "Big Five" personality traits were administered. Data were analyzed using multiple regression analyses and repeated-measures analyses of variance with a focus on potential interactions between the HA settings and user factors.
RESULTS: Consistent with the earlier study, preferred HA setting was related to PTA and executive control. However, effects were weaker this time. Noise sensitivity and personality did not interact with HA settings. As expected, spatial target speech configuration influenced preference, with bilateral and unilateral DIR "winning" in the single- and two-talker scenario, respectively. In general, participants with higher PTA tended to more strongly prefer bilateral DIR than participants with lower PTA.
CONCLUSIONS: Although the current study lends some support to the view that PTA and cognitive factors affect preferred DIR and NR setting, it also indicates that these effects can vary across noise management technologies. To facilitate more personalized HA fittings, future research should investigate the source of this variability.

PMID: 27564441 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bOGyTE
via IFTTT

Do Modern Hearing Aids Meet ANSI Standards?


J Am Acad Audiol. 2016 Sep;27(8):619-627

Authors: Holder JT, Picou EM, Gruenwald JM, Ricketts TA

Abstract
BACKGROUND: The American National Standards Institute (ANSI) provides standards used to govern standardization of all hearing aids. If hearing aids do not meet specifications, there are potential negative implications for hearing aid users, professionals, and the industry. Recent literature has not investigated the proportion of new hearing aids in compliance with the ANSI specifications for quality control standards when they arrive in the clinic before dispensing.
PURPOSE: The aims of this study were to determine the percentage of new hearing aids compliant with the relevant ANSI standard and to report trends in electroacoustic analysis data.
RESEARCH DESIGN: New hearing aids were evaluated for quality control via the ANSI S3.22-2009 standard. In addition, quality control of directional processing was also assessed.
STUDY SAMPLE: Seventy-three behind-the-ear hearing aids from four major manufacturers, purchased for clinical patients, were evaluated before dispensing.
DATA COLLECTION AND ANALYSIS: Audioscan Verifit (version 3.1) hearing instrument fitting system was used to complete electroacoustic analysis and directional processing evaluation of the hearing aids. Frye's Fonix 8000 test box system (Fonix 8000) was also used to cross-check equivalent input noise (EIN) measurements. These measurements were then analyzed for trends across brands and specifications.
RESULTS: All of the hearing aids evaluated were found to be out of specification for at least one measure. EIN and attack and release times were the measures most frequently out of specification. EIN was found to be affected by test box isolation for two of the four brands tested. Systematic discrepancies accounted for ∼93% of the noncompliance issues, while unsystematic quality control issues accounted for the remaining 7%.
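
One of the checks behind these results, equivalent input noise, is essentially output noise referred to the hearing aid's input; the sketch below assumes the usual HFA-based form (EIN = output noise SPL minus the gain for a 50 dB SPL input) with an illustrative +3 dB tolerance. It is a simplified illustration, not a full ANSI S3.22 procedure.

```python
# Hedged sketch of one check behind these results: equivalent input noise
# (EIN), assumed here in its usual HFA-based form, EIN = output noise SPL
# minus the gain for a 50 dB SPL input. The +3 dB tolerance and all readings
# are illustrative, not a full ANSI S3.22 procedure.
def ein_db(output_noise_spl, hfa_output_at_50_spl):
    gain_50 = hfa_output_at_50_spl - 50.0       # HFA gain for a 50 dB input
    return output_noise_spl - gain_50

def within_spec(measured_ein, specified_ein, tolerance_db=3.0):
    """Pass if measured EIN does not exceed the published value + tolerance."""
    return measured_ein <= specified_ein + tolerance_db

ein = ein_db(output_noise_spl=58.0, hfa_output_at_50_spl=85.0)
print(f"EIN = {ein:.1f} dB SPL, in spec: {within_spec(ein, specified_ein=28.0)}")
```
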
CONCLUSIONS: The high number of systematic discrepancies between the data collected and the specifications published by the manufacturers suggests there are clear issues related to the specific protocols used for quality control testing. These issues present a significant barrier for hearing aid dispensers when attempting to accurately determine if a hearing aid is functioning appropriately. The significant number of unsystematic discrepancies supports the continued importance of quality control measures of new and repaired hearing aids to ensure that the device is functioning properly before it is dispensed and to avoid future negative implications of fitting a faulty device.

PMID: 27564440 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bOGTpA
via IFTTT

Some Interesting Facts about the Journal of the American Academy of Audiology.


J Am Acad Audiol. 2016 Sep;27(8):618

Authors: McCaslin DL

PMID: 27564439 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2bOJaRg
via IFTTT

Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants

Purpose
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs.
Methods
Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to tax binaural hearing: sound-source localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli.
Results
Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available.
Conclusions
The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

from #Audiology via ola Kala on Inoreader http://ift.tt/29Z4ips
via IFTTT

Emotional Diathesis, Emotional Stress, and Childhood Stuttering

Purpose
The purpose of this study was to determine (a) whether emotional reactivity and emotional stress of children who stutter (CWS) are associated with their stuttering frequency, (b) when the relationship between emotional reactivity and stuttering frequency is more likely to exist, and (c) how these associations are mediated by a 3rd variable (e.g., sympathetic arousal).
Method
Participants were 47 young CWS (M age = 50.69 months, SD = 10.34). Measurement of participants' emotional reactivity was based on parental report, and emotional stress was engendered by viewing baseline, positive, and negative emotion-inducing video clips, with stuttered disfluencies and sympathetic arousal (indexed by tonic skin conductance level) measured during a narrative after viewing each of the various video clips.
Results
CWS's positive emotional reactivity was positively associated with percentage of their stuttered disfluencies regardless of emotional stress condition. CWS's negative emotional reactivity was more positively correlated with percentage of stuttered disfluencies during a narrative after a positive, compared with baseline, emotional stress condition. CWS's sympathetic arousal did not appear to mediate the effect of emotional reactivity, emotional stress condition, and their interaction on percentage of stuttered disfluencies, at least during this experimental narrative task following emotion-inducing video clips.
Conclusions
Results were taken to suggest an association between young CWS's positive emotional reactivity and stuttering, with negative reactivity seemingly more associated with these children's stuttering during positive emotional stress (a stress condition possibly associated with lesser degrees of emotion regulation). Such findings seem to support the notion that emotional processes warrant inclusion in any truly comprehensive account of childhood stuttering.

from #Audiology via ola Kala on Inoreader http://ift.tt/28QFXxn
via IFTTT

Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

Purpose
The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks.
Method
We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment.
Results
In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect.
Conclusions
The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze.

from #Audiology via ola Kala on Inoreader http://ift.tt/2aQ5ydF
via IFTTT

Clear Speech Variants: An Acoustic Study in Parkinson's Disease

Purpose
The authors investigated how different variants of clear speech affect segmental and suprasegmental acoustic measures of speech in speakers with Parkinson's disease and a healthy control group.
Method
A total of 14 participants with Parkinson's disease and 14 control participants served as speakers. Each speaker produced 18 different sentences selected from the Sentence Intelligibility Test (Yorkston & Beukelman, 1996). All speakers produced stimuli in 4 speaking conditions (habitual, clear, overenunciate, and hearing impaired). Segmental acoustic measures included vowel space area and first moment (M1) coefficient difference measures for consonant pairs. Second formant slope of diphthongs and measures of vowel and fricative durations were also obtained. Suprasegmental measures included fundamental frequency, sound pressure level, and articulation rate.
Results
For the majority of adjustments, all variants of clear speech instruction differed from the habitual condition. The overenunciate condition elicited the greatest magnitude of change for segmental measures (vowel space area, vowel durations) and the slowest articulation rates. The hearing impaired condition elicited the greatest fricative durations and suprasegmental adjustments (fundamental frequency, sound pressure level).
Conclusions
Findings have implications for a model of speech production for healthy speakers as well as for speakers with dysarthria. Findings also suggest that particular clear speech instructions may target distinct speech subsystems.

from #Audiology via ola Kala on Inoreader http://ift.tt/28T3ph6
via IFTTT

New Directions for Auditory Training: Introduction

Purpose
The purpose of this research forum article is to provide an overview of a collection of invited articles on contemporary issues in auditory training.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bqafrr
via IFTTT

Prevalence and Predictors of Persistent Speech Sound Disorder at Eight Years Old: Findings From a Population Cohort Study

Purpose
The purpose of this study was to determine prevalence and predictors of persistent speech sound disorder (SSD) in children aged 8 years after disregarding children presenting solely with common clinical distortions (i.e., residual errors).
Method
Data from the Avon Longitudinal Study of Parents and Children (Boyd et al., 2012) were used. Children were classified as having persistent SSD on the basis of percentage of consonants correct measures from connected speech samples. Multivariable logistic regression analyses were performed to identify predictors.
Results
The estimated prevalence of persistent SSD was 3.6%. Children with persistent SSD were more likely to be boys and from families who were not homeowners. Early childhood predictors identified as important were weak sucking at 4 weeks, not often combining words at 24 months, limited use of word morphology at 38 months, and being unintelligible to strangers at age 38 months. School-age predictors identified as important were maternal report of difficulty pronouncing certain sounds and hearing impairment at age 7 years, tympanostomy tube insertion at any age up to 8 years, and a history of suspected coordination problems. The contribution of these findings to our understanding of risk factors for persistent SSD and the nature of the condition is considered.
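
The predictor analysis above is a multivariable logistic regression; the sketch below runs one with statsmodels on simulated binary early-childhood predictors, with the intercept chosen so the baseline outcome rate sits near the reported 3.6% prevalence. Everything here is simulated, not ALSPAC data.

```python
# Sketch of the predictor analysis above: a multivariable logistic regression
# on simulated binary early-childhood predictors, with the intercept set so
# the baseline outcome rate sits near the reported 3.6% prevalence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
X = rng.binomial(1, [0.10, 0.15, 0.20], size=(n, 3)).astype(float)
logit_p = -3.3 + 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.6 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print("odds ratios:", np.exp(model.params[1:]).round(2))
```
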
Conclusion
Variables identified as predictive of persistent SSD suggest that factors across motor, cognitive, and linguistic processes may place a child at risk.

from #Audiology via ola Kala on Inoreader http://ift.tt/296H4hB
via IFTTT

Spontaneous Gesture Production and Lexical Abilities in Children With Specific Language Impairment in a Naming Task

Purpose
The purpose of the study was to investigate the role that cospeech gestures play in lexical production in preschool-age children with expressive specific language impairment (E-SLI).
Method
Fifteen preschoolers with E-SLI and 2 groups of typically developing (TD) children matched for chronological age (n = 15, CATD group) and for language abilities (n = 15, LATD group) completed a picture-naming task. The accuracy of the spoken answers (coded for types of correct and incorrect answers), the modality of expression (spoken and/or gestural), types of gestures, and semantic relationship between gestures and speech produced by children in the different groups were compared.
Results
Children with SLI produced higher rates of phonological simplifications and unintelligible answers than CATD children, but lower rates of semantic errors than LATD children. Unlike TD children, they did not show a significant preference for spoken answers. Like LATD children, they used gestures at higher rates than CATD children, producing both deictic and representational gestures that both reinforced the information conveyed in speech and added correct information to co-occurring speech.
Conclusions
These findings support the hypothesis that children with SLI rely on gestures to scaffold their speech and, unlike TD children, have no clear preference for the spoken modality; they also have implications for educational and clinical practice.

from #Audiology via ola Kala on Inoreader http://ift.tt/2aMZLFE
via IFTTT

Evidence That Bimanual Motor Timing Performance Is Not a Significant Factor in Developmental Stuttering

Purpose
Stuttering involves a breakdown in the speech motor system. We address whether stuttering in its early stage is specific to the speech motor system or whether its impact is observable across motor systems.
Method
As an extension of Olander, Smith, and Zelaznik (2010), we measured bimanual motor timing performance in 115 children: 70 children who stutter (CWS) and 45 children who do not stutter (CWNS). The children repeated the clapping task yearly for up to 5 years, using a synchronization-continuation rhythmic timing paradigm. Two analyses were completed. First, a cross-sectional analysis of data from the children in the initial year of the study (ages 4;0 [years;months] to 5;11) compared clapping performance between CWS and CWNS. Second, a multiyear analysis assessed clapping behavior across ages 3;5 to 9;5 to examine any potential relationship between clapping performance and eventual persistence of or recovery from stuttering.
Results
Preschool CWS were not different from CWNS on rates of clapping or variability in interclap interval. In addition, no relationship was found between bimanual motor timing performance and eventual persistence in or recovery from stuttering. The disparity between the present findings for preschoolers and those of Olander et al. (2010) most likely arises from the smaller sample size used in the earlier study.
Conclusion
From the current findings, on the basis of data from relatively large samples of stuttering and nonstuttering children tested over multiple years, we conclude that a bimanual motor timing deficit is not a core feature of early developmental stuttering.
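The two clapping measures compared above, clapping rate and interclap-interval variability, can be computed directly from recorded tap times. A minimal sketch with hypothetical data; the coefficient of variation used here is one common variability index, and the study's exact metric may differ:

# Sketch: clapping rate and interclap-interval variability from a
# synchronization-continuation trial. Tap times are hypothetical.
import numpy as np

def timing_measures(tap_times_s):
    """Return mean interclap interval (s), its coefficient of
    variation, and the clapping rate (claps/s)."""
    intervals = np.diff(np.asarray(tap_times_s))
    mean_ici = intervals.mean()
    cv = intervals.std(ddof=1) / mean_ici   # variability measure
    return mean_ici, cv, 1.0 / mean_ici

# Continuation phase of a trial paced at ~2 claps/s (hypothetical)
taps = np.cumsum(np.r_[0.0, 0.5 + 0.03 * np.random.randn(20)])
mean_ici, cv, rate = timing_measures(taps)
print(f"mean ICI {mean_ici:.3f} s, CV {cv:.3f}, rate {rate:.2f}/s")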

from #Audiology via ola Kala on Inoreader http://ift.tt/29wyc2L
via IFTTT

Initial Stop Voicing in Bilingual Children With Cochlear Implants and Their Typically Developing Peers With Normal Hearing

Purpose
This study focuses on stop voicing differentiation in bilingual children with normal hearing (NH) and their bilingual peers with hearing loss who use cochlear implants (CIs).
Method
Twenty-two bilingual children participated in our study (11 with NH, M age = 5;1 [years;months], and 11 with CIs, M hearing age = 5;1). The groups were matched on hearing age and a range of demographic variables. Single-word picture elicitation was used with word-initial singleton stop consonants. Repeated measures analyses of variance with three within-subject factors (language, stop voicing, and stop place of articulation) and one between-subjects factor (NH vs. CI user) were conducted with voice onset time and percentage of prevoiced stops as dependent variables.
Results
Main effects were statistically significant for language, stop voicing, and stop place of articulation on both voice onset time and prevoicing. There were no significant main effects for NH versus CI groups. Both children with NH and with CIs differentiated stop voicing in their languages and by stop place of articulation. Stop voicing differentiation was commensurate across the groups of children with NH versus CIs.
Conclusions
Stop voicing differentiation is accomplished in a similar fashion by bilingual children with NH and CIs, and both groups differentiate stop voicing in a language-specific fashion.
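The within-subject part of an analysis like the one described in the Method can be sketched with statsmodels' AnovaRM. Note that AnovaRM does not model between-subjects factors, so the NH-versus-CI comparison would require a mixed-model approach instead; all data below are hypothetical:

# Sketch: repeated-measures ANOVA on VOT with three within-subject
# factors (language, voicing, place). Data are simulated, one cell
# mean per child per condition.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for child in range(11):
    for lang in ("L1", "L2"):
        for voicing in ("voiced", "voiceless"):
            for place in ("bilabial", "alveolar", "velar"):
                base = 60 if voicing == "voiceless" else 5
                rows.append((child, lang, voicing, place,
                             base + rng.normal(0, 8)))
df = pd.DataFrame(rows, columns=["child", "language", "voicing",
                                 "place", "vot_ms"])

res = AnovaRM(df, depvar="vot_ms", subject="child",
              within=["language", "voicing", "place"]).fit()
print(res)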

from #Audiology via ola Kala on Inoreader http://ift.tt/297Iplo
via IFTTT

Auditory Training With Frequent Communication Partners

Purpose
Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)—speech they most likely desire to recognize—under the assumption that familiarity with the FCP's speech limits potential gains. This study determined whether auditory training with the speech of an individual's FCP, in this case their spouse, would lead to enhanced recognition of their spouse's speech.
Method
Ten couples completed a 6-week computerized auditory training program in which the spouse recorded the stimuli and the participant (partner with hearing loss) completed auditory training that presented recordings of their spouse.
Results
Training led participants to better discriminate their FCP's speech. Responses on the Client Oriented Scale of Improvement (Dillon, James, & Ginis, 1997) indicated subjectively that training reduced participants' communication difficulties. Performance on a word identification task did not change.
Conclusions
Results suggest that auditory training might improve the ability of older participants with hearing loss to recognize the speech of their spouse and might improve communication interactions between couples. The results support a task-appropriate processing framework of learning, which assumes that human learning depends on the degree of similarity between training tasks and desired outcomes.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bqa7Z9
via IFTTT

Friday 26 August 2016

The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment

Objective
The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing.
Design
In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups.
Results
Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, whereas no differences were observed between the AHI and NH groups. This outcome pattern persisted when comparisons were restricted to subgroups of AHI and CHI adults matched for current auditory speech reception abilities. The data yielded no differences between groups in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences presented visually.
Conclusions
These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu33Mu
via IFTTT

Association Between Osteoporosis/Osteopenia and Vestibular Dysfunction in South Korean Adults

Objective
The associations of osteoporosis/osteopenia with vestibular dysfunction have not been well evaluated, and conflicting results have been reported. The purpose of this study is to examine the relation of low bone mineral density (BMD) with vestibular dysfunction.
Design
The authors conducted a cross-sectional study in 3579 Korean adults aged 50 years and older who participated in the 2009 to 2010 Korea National Health and Nutrition Examination Survey. BMD was measured by dual-energy X-ray absorptiometry. Vestibular dysfunction was evaluated using the modified Romberg test of standing balance on firm and compliant support surfaces. Data were analyzed in 2015. Multiple logistic regression analysis was used to compute odds ratios (ORs) and 95% confidence intervals (CIs).
Results
The prevalence of vestibular dysfunction was 4.3 ± 0.5%. After adjustment for potential confounders, the adjusted ORs for vestibular dysfunction based on BMD were 1.00 (reference) for normal BMD, 2.21 (95% CI: 1.08, 4.50) for osteopenia, and 2.47 (95% CI: 1.05, 5.81) for osteoporosis (p
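As a quick check of how the reported ORs and CIs relate to underlying regression coefficients: OR = exp(beta), and the 95% CI is exp(beta +/- 1.96*SE). The coefficient and standard error below are back-derived from the osteopenia figures above for illustration only; they are not the study's fitted model:

# Sketch: odds ratio and 95% CI from a logistic regression
# coefficient. Values are hypothetical, chosen to roughly reproduce
# the reported osteopenia OR of 2.21 (1.08, 4.50).
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR = exp(beta); 95% CI = exp(beta +/- z * SE)."""
    return math.exp(beta), (math.exp(beta - z * se),
                            math.exp(beta + z * se))

beta_osteopenia = 0.793   # hypothetical log-odds coefficient
se_osteopenia = 0.364     # hypothetical standard error
or_, (lo, hi) = odds_ratio_ci(beta_osteopenia, se_osteopenia)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}, {hi:.2f})")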

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0D0F
via IFTTT

Reflectance Measures from Infant Ears With Normal Hearing and Transient Conductive Hearing Loss

Objective
The objective is to develop methods to utilize newborn reflectance measures for the identification of middle-ear transient conditions (e.g., middle-ear fluid) during the newborn period and ultimately during the first few months of life. Transient middle-ear conditions are a suspected source of failure to pass a newborn hearing screening. The ability to identify a conductive loss during the screening procedure could enable the referred ear to be either (1) cleared of a middle-ear condition and recommended for more extensive hearing assessment as soon as possible, or (2) suspected of a transient middle-ear condition, and if desired, be rescreened before more extensive hearing assessment.
Design
Reflectance measurements are reported from full-term, healthy, newborn babies in which one ear referred and one ear passed an initial auditory brainstem response newborn hearing screening and a subsequent distortion product otoacoustic emission screening on the same day. These same subjects returned for a detailed follow-up evaluation at age 1 month (range 14 to 35 days). In total, measurements were made on 30 subjects who had a unilateral refer near birth (during their first 2 days of life) and bilateral normal hearing at follow-up (about 1 month old). Three specific comparisons were made: (1) association of the ear's state with power reflectance near birth (referred versus passed ear), (2) changes in power reflectance of normal ears between newborn and 1 month old (maturation effects), and (3) association of the ear's newborn state (referred versus passed) with the ear's power reflectance at 1 month. In addition to these measurements, a set of preliminary data selection criteria were developed to ensure that analyzed data were not corrupted by acoustic leaks and other measurement problems.
Results
Within 2 days of birth, the power reflectance measured in newborn ears with transient middle-ear conditions (referred newborn hearing screening and passed hearing assessment at age 1 month) was significantly greater than power reflectance in newborn ears that passed the newborn hearing screening across all frequencies (500 to 6000 Hz). Changes in power reflectance in normal ears from newborn to 1 month appear in approximately the 2000 to 5000 Hz range but are not present at other frequencies. The power reflectance at age 1 month does not depend significantly on the ear's state near birth (refer or pass hearing screening) for frequencies above 700 Hz; there might be small differences at lower frequencies.
Conclusions
Power reflectance measurements are significantly different for ears that pass newborn hearing screening and ears that refer with middle-ear transient conditions. At age 1 month, about 90% of ears that referred at birth passed an auditory brainstem response hearing evaluation; within these ears, the power reflectance at 1 month did not differ between the ear that initially referred at birth and the ear that passed the hearing screening at birth for frequencies above 700 Hz. This study also proposes a preliminary set of criteria for determining when reflectance measures on young babies are corrupted by acoustic leaks, probes against the ear canal, or other measurement problems. Specifically proposed are "data selection criteria" that depend on the power reflectance, impedance magnitude, and impedance angle. Additional data collected in the future are needed to improve and test these proposed criteria.
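The abstract does not give the actual criterion values, but the general shape of such data selection criteria can be sketched: flag a measurement as suspect when low-frequency power reflectance, impedance magnitude, or impedance angle falls outside expected ranges. Every threshold below is a hypothetical placeholder, not the paper's criteria:

# Sketch: leak/artifact screening of a reflectance measurement based
# on power reflectance, impedance magnitude, and impedance angle.
# All thresholds are hypothetical placeholders.
import numpy as np

def measurement_ok(freq_hz, power_reflectance, z_mag, z_angle_deg):
    """Flag a reflectance measurement as usable (True) or suspect."""
    freq_hz = np.asarray(freq_hz)
    low = (freq_hz >= 200) & (freq_hz <= 500)
    # Acoustic leaks typically pull low-frequency power reflectance
    # down and push the impedance angle away from compliance-like values.
    if np.any(np.asarray(power_reflectance)[low] < 0.3):   # hypothetical
        return False
    if np.any(np.asarray(z_angle_deg)[low] > -30):         # hypothetical
        return False
    if np.median(np.asarray(z_mag)[low]) < 1e7:            # hypothetical, MKS ohms
        return False
    return True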

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1UVn
via IFTTT

The Physiological Basis and Clinical Use of the Binaural Interaction Component of the Auditory Brainstem Response

The auditory brainstem response (ABR) is a sound-evoked, noninvasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, the authors discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. The authors review how interaural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized.
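The BIC definition above translates directly into a sample-by-sample subtraction of time-aligned averaged waveforms, with DN1 taken as the largest negative deflection in a post-stimulus window. A minimal sketch; the latency window is illustrative, not a published standard:

# Sketch: BIC(t) = ABR_binaural(t) - [ABR_left(t) + ABR_right(t)],
# computed on time-aligned averaged waveforms (hypothetical arrays).
import numpy as np

def binaural_interaction_component(abr_binaural, abr_left, abr_right):
    """Residual response after removing the summed monaural ABRs."""
    return abr_binaural - (abr_left + abr_right)

def dn1_amplitude(bic, fs, window_ms=(3.0, 7.0)):
    """DN1: most negative BIC value in a post-stimulus latency window
    (window limits here are illustrative)."""
    i0, i1 = (int(t * 1e-3 * fs) for t in window_ms)
    seg = bic[i0:i1]
    return seg.min(), (i0 + seg.argmin()) / fs * 1e3  # amplitude, latency (ms)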

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu31Em
via IFTTT

A Novel Algorithm to Derive Spread of Excitation Based on Deconvolution

Objective
The width of the spread of excitation (SOE) curve has been widely thought to represent an estimate of SOE. Therefore, correlates between psychophysical parameters, such as pitch discrimination and speech perception, and the width of SOE curves have long been investigated. However, to date, no relationships between these objective and subjective measurements have been determined. In a departure from the current thinking, the authors now propose that the SOE curve, recorded with forward masking, is the equivalent of a convolution operation. As such, deconvolution would be expected to retrieve the excitation areas attributable to either masker or probe, potentially more closely revealing the actual neural SOE. This study aimed to develop a new analytical tool with which to derive SOE using this principle.
Design
Intraoperative SOE curve measurements of 16 subjects, implanted with an Advanced Bionics implant, were analyzed. Evoked compound action potential (eCAP)-based SOE curves were recorded on electrodes 3 to 16, using the forward-masker paradigm with variable masker. The measured SOE curves were then compared with predicted SOE curves, built by the convolution of basic excitation density profiles (EDPs). Predicted SOE curves were fitted to the measured SOEs by iterative adjustment of the EDPs for the masker and the probe.
Results
It was possible to generate a good fit between the predicted and measured SOE curves, inclusive of their asymmetry. The rectangular EDP was of least value in terms of its ability to generate a good fit; smoother SOE curves were modeled using the exponential or Gaussian EDPs. In most subjects, the EDP width (i.e., the size of the excitation area) gradually changed from wide at the apex of the electrode array to narrow at the base. A comparison of EDP widths to SOE curve widths, as calculated in the literature, revealed that the EDPs now provide a measure of the SOE that is qualitatively distinct from that provided using conventional methods.
Conclusions
This study shows that an eCAP-based SOE curve, measured with forward masking, can be treated as a convolution of EDPs for masker and probe. The poor fit achieved for the measured and modeled data using the rectangular EDP emphasizes the requirement for a sloping excitation area to mimic actual SOE recordings. Our deconvolution method provides an explanation for the frequently observed asymmetry of SOE curves measured along the electrode array, as this is a consequence of a wider excitation area in the apical part of the cochlea, in the absence of any asymmetry in the actual EDP. In addition, broader apical EDPs underlie the higher eCAP amplitudes found for apical stimulation.
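The convolution idea at the heart of the method can be sketched as a forward model: build EDPs for masker and probe, predict the SOE curve from their overlap at each masker position, and fit the EDP parameters by least squares. Gaussian EDPs and all values below are illustrative, not the authors' implementation:

# Sketch: predicted SOE curve as the overlap (convolution-like
# operation) of masker and probe excitation density profiles, fitted
# to a simulated measurement.
import numpy as np
from scipy.optimize import least_squares

x = np.arange(3, 17)                      # masker electrode positions

def gaussian_edp(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def predicted_soe(params, probe_pos):
    w_masker, w_probe, amp = params
    probe = gaussian_edp(x, probe_pos, w_probe)
    # Overlap of masker and probe excitation areas for each masker
    # position; equivalent to a convolution when widths are constant.
    masker = np.array([gaussian_edp(x, m, w_masker) for m in x])
    return amp * masker @ probe

measured = predicted_soe([2.0, 1.5, 1.0], 9) + 0.03 * np.random.randn(len(x))
fit = least_squares(lambda p: predicted_soe(p, 9) - measured,
                    x0=[1.0, 1.0, 1.0], bounds=(0.1, 10))
print("fitted EDP widths (masker, probe) and amplitude:", fit.x)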

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1heA
via IFTTT

Intelligibility of the Patient’s Speech Predicts the Likelihood of Cochlear Implant Success in Prelingually Deaf Adults

Objectives
The objective of this study was to determine the validity and clinical applicability of intelligibility of the patient's own speech, measured via a Vowel Identification Test (VOW), as a predictor of speech perception for prelingually deafened adults after 1 year of cochlear implant use. Specifically, the objective was to investigate the probability that a prelingually deaf patient, given a VOW score above (or below) a chosen cutoff point, reaches a postimplant speech perception score above (or below) a critical value. High predictive values for VOW could support preimplant counseling and implant candidacy decisions in individual patients.
Design
One hundred fifty-two adult cochlear implant candidates with prelingual hearing impairment or deafness took part as speakers in a VOW; 149 speakers completed the test successfully. Recordings of the speech stimuli, consisting of nonsense words of the form [h]-V-[t], where V represents one of 15 vowels/diphthongs, were presented to two normal-hearing listeners. The VOW score was expressed as the percentage of vowels identified correctly (averaged over the 2 listeners). Subsequently, the 149 participants enrolled in the cochlear implant selection procedure. Extremely poor speakers were excluded from implantation, as were patients who did not meet the regular selection criteria developed for postlingually deafened patients. Of the 149 participants, 92 were selected for implantation. For the implanted group, speech perception data were collected at 1-year postimplantation.
Results
Speech perception score at 1-year postimplantation (available for 77 of the 92 implanted participants) correlated positively with preimplant intelligibility of the patient's speech, as represented by VOW (r = 0.79, p

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0uKn
via IFTTT

Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech

Objectives
Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher-context (CUNY) sentences than for the lower-context (IEEE) sentences.
Design
Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise-band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE), and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined.
Results
Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear.
Conclusions
Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
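The two gain measures named in the Design are simple to state: percentage-point gain is the raw score difference, while normalized gain rescales that difference by the headroom above the vocoder-alone baseline. A sketch under those usual definitions, with hypothetical scores:

# Sketch: the two bimodal-benefit measures. Scores are hypothetical
# percent-correct values.
def percentage_point_gain(bimodal, vocoder_alone):
    return bimodal - vocoder_alone

def normalized_gain(bimodal, vocoder_alone):
    """(bimodal - baseline) / (100 - baseline), expressed in percent."""
    return 100.0 * (bimodal - vocoder_alone) / (100.0 - vocoder_alone)

print(percentage_point_gain(62.0, 48.0))      # 14.0 percentage points
print(f"{normalized_gain(62.0, 48.0):.1f}")   # 26.9 normalized gain

Normalizing by headroom is what allows fair comparison across conditions with different baseline scores, which is why both measures are reported.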

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0oCJ
via IFTTT

Human Envelope Following Responses to Amplitude Modulation: Effects of Aging and Modulation Depth

Objective
To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time, and to compare these objective electrophysiological measures with subjective behavioral thresholds in young normal-hearing and older subjects.
Design
Participants: Three groups of subjects included a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s), and EFRs were analyzed as a function of AM depth. In condition 2, auditory steady-state responses were recorded to fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A 3-alternative forced-choice (3AFC) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold.
Results
Across all ages, the fixed-AM-depth auditory steady-state response and swept-AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly (not significantly) higher behavioral AM detection thresholds than younger subjects, and AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range. In the young normal-hearing group, the EFR phase did not differ with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (or phase slope) was significantly correlated with the pure-tone threshold at 4 kHz.
Conclusions
EFRs can be recorded using either the swept-modulation-depth or the discrete-AM-depth technique. Sweep recordings may provide additional valuable information at suprathreshold intensities, including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in the depth of AM. The phase-slope differences are likely related to differences in low- and high-frequency contributions to the EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present subjects, who did not have apparent temporal processing deficits.
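The sigmoidal EFR-amplitude-versus-AM-depth function described above can be characterized by fitting a logistic curve and reading off the plateau, midpoint, and slope. A minimal sketch with simulated data; the exact functional form and units used by the authors may differ:

# Sketch: logistic fit of EFR amplitude vs AM depth to extract
# plateau, midpoint, and slope. Data are simulated.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(depth, floor, plateau, midpoint, slope):
    return floor + (plateau - floor) / (1 + np.exp(-(depth - midpoint) / slope))

depth = np.linspace(2, 100, 20)                  # AM depth, percent
amp = sigmoid(depth, 5, 80, 40, 10) + 2 * np.random.randn(depth.size)

p, _ = curve_fit(sigmoid, depth, amp, p0=[0, 70, 50, 5])
floor, plateau, midpoint, slope = p
print(f"plateau {plateau:.1f} nV at midpoint depth {midpoint:.1f}%")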

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu0CK9
via IFTTT

Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing

Objectives
This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues.
Design
Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test).
Results
Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance.
Conclusions
The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1tdK
via IFTTT

The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users

Objective
This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization.
Design
Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements.
Results
The ITE microphone placement provided significantly larger ILDs compared with the BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small.
Conclusions
The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared with the other placements in some patients. Larger improvements might be observed if patients had more experience with the new ILD cues provided by an ITE placement.
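Computing an ILD from paired transfer function measurements reduces to a ratio of band power between the two ears. A minimal sketch, assuming measured left/right impulse responses for one source angle; the analysis band is an assumption, chosen because ILDs are most informative at high frequencies:

# Sketch: broadband ILD (dB) from left/right microphone impulse
# responses for one source direction. Inputs are hypothetical.
import numpy as np

def ild_db(h_left, h_right, fs, band=(2000, 8000)):
    """ILD in a frequency band, positive when the left ear is louder."""
    n = len(h_left)
    f = np.fft.rfftfreq(n, 1 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    pl = np.abs(np.fft.rfft(h_left))[sel] ** 2
    pr = np.abs(np.fft.rfft(h_right))[sel] ** 2
    return 10 * np.log10(pl.sum() / pr.sum())

Evaluating this for each source angle and each placement (ITE, BTE, SHD) would reproduce the kind of ILD-versus-azimuth comparison the study reports.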

from #Audiology via ola Kala on Inoreader http://ift.tt/2bu1nmz
via IFTTT

Test-Retest Reliability of the Binaural Interaction Component of the Auditory Brainstem Response

Objectives
The binaural interaction component (BIC) is the residual auditory brainstem response (ABR) obtained after subtracting the sum of the monaurally evoked from the binaurally evoked ABRs. The DN1 peak, the first negative peak of the BIC, has been postulated to have diagnostic value as a biomarker for binaural hearing abilities. Indeed, not only do DN1 amplitudes depend systematically on binaural cues to location (interaural time and level differences), but they are also predictive of central hearing deficits in humans. A prominent issue in using BIC measures as a diagnostic biomarker is that DN1 amplitudes exhibit considerable variability not only across subjects but also within subjects across different measurement sessions.
Design
In this study, the authors investigate the reliability of DN1 amplitude measurements by conducting repeated measurements on different days in eight adult guinea pigs.
Results
Despite consistent ABR thresholds, ABR and DN1 amplitudes varied between and within subjects across recording sessions. However, the analysis reveals that DN1 amplitudes varied proportionally with the parent monaural ABR amplitudes, suggesting that common experimental factors likely account for the variability in both waveforms. Despite this variability, the authors show that the shape of the dependence between DN1 amplitude and interaural time difference is preserved. The authors then provide a BIC normalization strategy using monaural ABR amplitude that reduces the variability of DN1 peak measurements. Finally, the authors evaluate this normalization strategy in the context of detecting changes in the DN1 amplitude-to-interaural time difference relationship.
Conclusions
The results indicate that BIC measurement variability can be reduced by a factor of two by performing a simple and objective normalization operation. The authors discuss the potential of this normalized BIC measure as a biomarker for binaural hearing.
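One plausible reading of the normalization strategy is to divide DN1 amplitude by the parent monaural ABR amplitudes so that shared measurement factors (electrode contact, anesthesia depth, and so on) cancel; the exact normalizer used in the paper may differ. A sketch with hypothetical values:

# Sketch: normalizing DN1 amplitude by the mean of the two parent
# monaural ABR amplitudes. All values are hypothetical.
def normalized_dn1(dn1_amp_uv, abr_left_uv, abr_right_uv):
    return dn1_amp_uv / (0.5 * (abr_left_uv + abr_right_uv))

# Two sessions with different overall response sizes (hypothetical):
print(normalized_dn1(-0.30, 1.5, 1.7))   # session 1
print(normalized_dn1(-0.21, 1.1, 1.2))   # session 2: smaller ABRs,
                                         # but similar normalized DN1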

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0wCl
via IFTTT

Changes in the Compressive Nonlinearity of the Cochlea During Early Aging: Estimates From Distortion OAE Input/Output Functions

Objectives
The level-dependent growth of distortion product otoacoustic emissions (DPOAEs) provides an indirect metric of cochlear compressive nonlinearity. Recent evidence suggests that aging reduces nonlinear distortion emissions more than those associated with linear reflection. Therefore, in this study, we generate input/output (I/O) functions from the isolated distortion component of the DPOAE to probe the effects of early aging on the compressive nonlinearity of the cochlea.
Design
Thirty adults whose ages ranged from 18 to 64 years participated in this study, forming a continuum of young to middle-aged subjects. When necessary for analyses, subjects were divided into a young-adult group with a mean age of 21 years and a middle-aged group with a mean age of 52 years. All young-adult subjects and 11 of the middle-aged subjects had normal hearing; 4 middle-aged ears had slight audiometric threshold elevation at mid-to-high frequencies. DPOAEs (2f1 − f2) were recorded using primary tones swept upward in frequency from 0.5 to 8 kHz and varied from 25 to 80 dB sound pressure level. The nonlinear distortion component of the total DPOAE was separated and used to create I/O functions at one-half-octave intervals from 1.3 to 7.4 kHz. Four features of OAE compression were extracted from a fit to these functions: compression threshold, range of compression, compression slope, and low-level growth. These values were compared between age groups, and correlational analyses were conducted between OAE compression threshold and age with audiometric threshold controlled.
Results
Older ears had reduced DPOAE amplitude compared with young-adult ears. The OAE compression threshold was elevated at test frequencies above 2 kHz in the middle-aged subjects by 19 dB (35 versus 54 dB SPL), thereby reducing the compression range. In addition, middle-aged ears showed steeper amplitude growth beyond the compression threshold. Audiometric threshold was initially found to be a confound in establishing the relationship between compression and age; however, statistical analyses allowed us to control its variance. Correlations performed while controlling for age differences in high-frequency audiometric thresholds showed significant relationships between the DPOAE I/O compression threshold and age: older subjects tended to have elevated compression thresholds compared with younger subjects and an extended range of monotonic growth.
Conclusions
Cochlear manifestations of nonlinearity, such as the DPOAE, weaken during early aging, and DPOAE I/O functions become linearized. Commensurate changes in high-frequency audiometric thresholds are not sufficient to fully explain these changes. The results suggest that age-related changes in compressive nonlinearity could produce a reduced dynamic range of hearing and contribute to perceptual difficulties in older listeners.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu15Mf
via IFTTT

Using the Digits-In-Noise Test to Estimate Age-Related Hearing Loss

Objective
Age-related hearing loss is common in the elderly population. Timely detection and targeted counseling can lead to adequate treatment with hearing aids. The Digits-In-Noise (DIN) test was developed as a relatively simple test of hearing acuity. It is a potentially powerful test for the screening of large populations, including the elderly. However, to date, no sensitivity or specificity rates for detecting hearing loss have been reported in a general elderly population. The purpose of this study was to evaluate the ability of the DIN test to screen for mild and moderate hearing loss in the elderly.
Design
Data from pure-tone audiometry and the DIN test were collected from 3327 adults aged over 50 years (mean: 65), as part of the Rotterdam Study, a large population-based cohort study. Sensitivity and specificity of the DIN test for detecting hearing loss were calculated by comparing the speech reception threshold (SRT) with the pure-tone average threshold at 0.5, 1, 2, and 4 kHz (PTA0.5,1,2,4). Receiver operating characteristics were calculated for detecting >20 and >35 dB HL average hearing loss in the best ear.
Results
Hearing loss varied greatly between subjects and, as expected, increased with age. High frequencies and men were more severely affected. A strong correlation (R = 0.80, p
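The reported sensitivity/specificity analysis amounts to sweeping an SRT cutoff against a PTA-defined criterion for hearing loss. A minimal sketch with simulated data; the cutoff, criterion, and the SRT-PTA relationship below are all illustrative, not the study's values:

# Sketch: sensitivity and specificity of an SRT cutoff for detecting
# best-ear PTA > 20 dB HL. Data are simulated.
import numpy as np

def sens_spec(srt_db, pta_db, srt_cutoff, pta_criterion=20.0):
    impaired = pta_db > pta_criterion
    positive = srt_db > srt_cutoff          # poorer (higher) SRT = fail
    sens = np.mean(positive[impaired])
    spec = np.mean(~positive[~impaired])
    return sens, spec

rng = np.random.default_rng(0)
pta = rng.uniform(0, 60, 500)                       # dB HL
srt = -12 + 0.12 * pta + rng.normal(0, 1.5, 500)    # SRT worsens with PTA
print(sens_spec(srt, pta, srt_cutoff=-8.0))

Repeating this over a range of cutoffs traces out the receiver operating characteristic mentioned above.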

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu1pdT
via IFTTT

Comparing the Accuracy and Speed of Manual and Tracking Methods of Measuring Hearing Thresholds

Objectives
The reliability of hearing thresholds obtained using the standard clinical method (modified Hughson-Westlake) has been the focus of previous investigation, given the potential for tester bias (Margolis et al., 2015). In recent years, more precise methods have been used in laboratory studies to control for sources of bias, often at the expense of longer test times. The aim of this pilot study was to compare the test-retest variability and the time required to obtain a full set of hearing thresholds (0.125 to 20 kHz) for the clinical modified Hughson-Westlake (manual) method and the automated, modified (single-frequency) Békésy tracking method (Lee et al., 2012).
Design
Hearing thresholds from 10 subjects (8 female), 19 to 47 years old (mean = 28.3; SD = 9.4), were measured using the two methods with identical test hardware and calibration. Thresholds were obtained using the modified Hughson-Westlake (manual) method and the Békésy (tracking) method. Measurements with each method were repeated after one week, and test-retest variability within each method was computed across test sessions. Results from the two test methods, as well as test times, were compared.
Results
Test-retest variability was comparable and statistically indistinguishable between the two test methods. Thresholds were approximately 5 dB lower when measured using the tracking method, but this difference was not statistically significant. The manual method of measuring thresholds was faster by approximately 4 minutes. Both methods required less time (~2 min) in the second session than in the first.
Conclusion
Hearing thresholds obtained using the manual method can be just as reliable as those obtained using the tracking method over the large frequency range explored here (0.125 to 20 kHz). These results perhaps point to the importance of equivalent and valid calibration techniques that can overcome frequency-dependent discrepancies, most prominent at higher frequencies, in the sound pressure delivered to the ear.
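A Békésy-style tracking threshold is typically estimated by averaging the levels at tracking reversals. A minimal sketch with a hypothetical track; the study's exact estimation rule is not given in the abstract:

# Sketch: threshold from a Bekesy-style up/down track, taken as the
# mean level at tracking reversals. The track is hypothetical.
import numpy as np

def threshold_from_track(levels_db):
    """Mean level at reversals of a continuous up/down track."""
    levels = np.asarray(levels_db, dtype=float)
    d = np.sign(np.diff(levels))                 # direction of each step
    reversals = np.where(d[1:] != d[:-1])[0] + 1 # indices of extrema
    return levels[reversals].mean()

track = [30, 25, 20, 15, 10, 13, 16, 11, 6, 9, 12, 7, 10]
print(f"threshold ~ {threshold_from_track(track):.1f} dB HL")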

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2bu0NVJ
via IFTTT