Wednesday, 24 May 2017

PROPIONIBACTERIUM ACNES AND CHRONIC DISEASES


P. acnes bacteria live deep within follicles and pores, away from the surface of the skin. In these follicles, P. acnes bacteria use sebum, cellular debris and metabolic byproducts from the surrounding skin tissue as their primary sources of energy and nutrients. Elevated production of sebum by hyperactive sebaceous glands (sebaceous hyperplasia) or blockage of the follicle can cause P. acnes bacteria to grow and multiply.[6]

P. acnes bacteria secrete many proteins, including several digestive enzymes.[7] These enzymes are involved in the digestion of sebum and the acquisition of other nutrients. They can also destabilize the layers of cells that form the walls of the follicle. The cellular damage, metabolic byproducts and bacterial debris produced by the rapid growth of P. acnes in follicles can trigger inflammation.[8] This inflammation can lead to the symptoms associated with some common skin disorders, such as folliculitis and acne vulgaris.[9][10][11]

The damage caused by P. acnes and the associated inflammation make the affected tissue more susceptible to colonization by opportunistic bacteria, such as Staphylococcus aureus. Preliminary research shows that healthy pores are colonized only by P. acnes, while unhealthy ones universally also harbor the nonpore-resident Staphylococcus epidermidis, among other bacterial contaminants. Whether this is a root cause, merely opportunistic colonization, or a more complex pathological interplay between P. acnes and this particular Staphylococcus species is not known.[12]

P. acnes has also been found in corneal ulcers, and is a common cause of chronic endophthalmitis following cataract surgery. Rarely, it infects heart valves leading to endocarditis, and infections of joints (septic arthritis) have been reported.[5] Furthermore, Propionibacterium species have been found in ventriculostomy insertion sites, and areas subcutaneous to suture sites in patients who have undergone craniotomy. It is a common contaminant in blood and cerebrospinal fluid cultures.

P. acnes has been found in herniated discs.[13] The propionic acid which it secretes creates micro-fractures of the surrounding bone. These micro-fractures are sensitive and it has been found that antibiotics have been helpful in resolving this type of low back pain.[14]

P. acnes can be found in bronchoalveolar lavage fluid of approximately 70% of patients with sarcoidosis and is associated with disease activity, but it can also be found in 23% of controls.[15][16] The subspecies of P. acnes that cause these infections of otherwise sterile tissues (prior to medical procedures), however, are the same subspecies found on the skin of individuals who do not have acne-prone skin, so they are likely local contaminants. Moderate to severe acne vulgaris appears to be more often associated with virulent strains.[17]

P. acnes is an opportunistic pathogen, causing a range of postoperative and device-related infections, e.g., after surgery,[18] post-neurosurgical infection,[19] and infections of joint prostheses, shunts, and prosthetic heart valves. P. acnes may play a role in other conditions, including inflammation of the prostate leading to cancer,[20] SAPHO (Synovitis, Acne, Pustulosis, Hyperostosis, Osteitis) syndrome, sarcoidosis, and sciatica.[21]

Alexandros Sfakianakis
Anapafseos 5, Agios Nikolaos
Crete, Greece 72100
2841026182
6948891480



Dichotic Listening Deficit Associated With Solvent Exposure.
by Landry, Simon P.; Fuente, Adrian, in Otology & Neurotology, Published Ahead-of-Print
Hypothesis: A significant left-ear deficit can be observed in solvent-exposed individuals using the dichotic digit test.
Background: Solvents are ubiquitous in global industrial processes. Due to their lipophilic nature, solvents can adversely affect large white-matter tracts such as the corpus callosum. Previous investigations reveal that long-term workplace exposure to solvents is also deleterious to various auditory processes. Investigations in exposed populations suggest decreased performance in dichotic listening.
Methods: In the present study, we examined the lateralization of dichotic digit test scores for 49 solvent-exposed individuals along with 49 age- and sex-matched controls. We evaluated group differences between test scores and the right-ear advantage using a laterality index (LI).
Results: Individual ear results suggest that long-term workplace solvent exposure is associated with a significantly lower dichotic listening score for the left ear. A binaural compound score analysis using a laterality index supports this left-ear deficit.
Conclusion: These results provide insight into the effects of solvent exposure on dichotic listening abilities. Further research should investigate the importance of using dichotic listening tasks to screen for solvent-induced auditory dysfunction in exposed individuals. Copyright (C) 2017 by Otology & Neurotology, Inc.
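The abstract summarizes ear asymmetry with a laterality index (LI) but does not give its formula. A common formulation (an assumption here, not necessarily the one Landry and Fuente used) normalizes the right-left score difference by the total:

```python
def laterality_index(right_score: float, left_score: float) -> float:
    """Common laterality index: +1 = full right-ear advantage,
    -1 = full left-ear advantage, 0 = symmetric performance.
    This is a standard formulation, not necessarily the exact
    index used in the study."""
    total = right_score + left_score
    if total == 0:
        raise ValueError("both ear scores are zero; LI is undefined")
    return (right_score - left_score) / total

# Hypothetical listener: right ear 90% correct, left ear 70% correct
print(laterality_index(90, 70))  # 0.125 -> modest right-ear advantage
```

A reduced or negative LI in an exposed worker would correspond to the left-ear deficit the study reports.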


Epstein-Barr virus, human endogenous retroviruses (HERVs), and human herpesvirus 6 (HHV-6), as well as less common viruses such as Saffold virus and measles virus, are associated with multiple sclerosis.




Vocal Behavior in Environmental Noise: Comparisons Between Work and Leisure Conditions in Women With Work-related Voice Disorders and Matched Controls


Publication date: Available online 24 May 2017
Source: Journal of Voice
Author(s): Annika Szabo Portela, Svante Granqvist, Sten Ternström, Maria Södersten
Objectives: This study aimed to assess vocal behavior in women with voice-intensive occupations, to investigate differences between patients and controls and between work and leisure conditions, with environmental noise level as an experimental factor.
Methods: Patients with work-related voice disorders, 10 with phonasthenia and 10 with vocal nodules, were matched for age, profession, and workplace with 20 vocally healthy colleagues. The sound pressure levels of the environmental noise and the speakers' voices, fundamental frequency, and phonation ratio were registered from morning to night for one week with a voice accumulator. Voice data were assessed at low (≤55 dBA), moderate, and high (>70 dBA) environmental noise levels.
Results: The average environmental noise level was significantly higher during the work condition for patients with vocal nodules (73.9 dBA) and their controls (73.0 dBA) compared with patients with phonasthenia (68.3 dBA) and their controls (67.1 dBA). The average voice level and fundamental frequency were also significantly higher during work for the patients with vocal nodules and their controls. During the leisure condition, there were no significant differences between the groups in average noise level, voice level, or fundamental frequency. The patients with vocal nodules and their controls spent significantly more time, and used their voices significantly more, in high environmental noise levels.
Conclusions: High noise levels during work, together with occupational demands, affect vocal behavior. Assessment of voice ergonomics should therefore be part of work environment management. Reducing environmental noise levels is important for improving voice-ergonomic conditions in communication-intensive and vocally demanding workplaces.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2rjiAsT
via IFTTT

The Use of the Gaps-In-Noise Test as an Index of the Enhanced Left Temporal Cortical Thinning Associated with the Transition between Mild Cognitive Impairment and Alzheimer's Disease.


J Am Acad Audiol. 2017 May;28(5):463-471

Authors: Iliadou VV, Bamiou DE, Sidiras C, Moschopoulos NP, Tsolaki M, Nimatoudis I, Chermak GD

Abstract
BACKGROUND: The known link between auditory perception and cognition is often overlooked when testing for cognition.
PURPOSE: To evaluate auditory perception in a group of older adults diagnosed with mild cognitive impairment (MCI).
RESEARCH DESIGN: A cross-sectional study of auditory perception.
STUDY SAMPLE: Adults with MCI and adults with no documented cognitive issues and matched hearing sensitivity and age.
DATA COLLECTION: Auditory perception was evaluated in both groups, assessing for hearing sensitivity, speech in babble (SinB), and temporal resolution.
RESULTS: The Mann-Whitney test revealed significantly poorer scores for SinB and temporal resolution abilities of MCIs versus normal controls for both ears. The right-ear gap detection thresholds on the Gaps-In-Noise (GIN) Test clearly differentiated between the two groups (p < 0.001), with no overlap of values. The left-ear results also differentiated the two groups (p < 0.01); however, there was a small degree of overlap around the ∼8-msec threshold values. With the exception of the left-ear inattentiveness index, which showed a similar distribution between groups, both impulsivity and inattentiveness indexes were higher for the MCIs compared to the control group.
CONCLUSIONS: The results support central auditory processing evaluation in the elderly population as a promising tool to achieve earlier diagnosis of dementia, while identifying central auditory processing deficits that can contribute to communication deficits in the MCI patient population. A measure of temporal resolution (GIN) may offer an early, albeit indirect, measure reflecting left temporal cortical thinning associated with the transition between MCI and Alzheimer's disease.

PMID: 28534735 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rW6zpF
via IFTTT

The Impact of Single-Sided Deafness upon Music Appreciation.


J Am Acad Audiol. 2017 May;28(5):444-462

Authors: Meehan S, Hough EA, Crundwell G, Knappett R, Smith M, Baguley DM

Abstract
BACKGROUND: A substantial share of the world's population has hearing loss in one ear; current statistics indicate that up to 10% of the population may be affected. Although the detrimental impact of bilateral hearing loss, hearing aids, and cochlear implants upon music appreciation is well recognized, studies on the influence of single-sided deafness (SSD) are sparse.
PURPOSE: We sought to investigate whether a single-sided hearing loss can cause problems with music appreciation, despite normal hearing in the other ear.
RESEARCH DESIGN: A tailored questionnaire was used to investigate music appreciation for those with SSD.
STUDY SAMPLE: We performed a retrospective survey of a population of 51 adults from a University Hospital Audiology Department SSD clinic. SSD was predominantly adult-onset sensorineural hearing loss, caused by a variety of etiologies.
DATA ANALYSIS: Analyses were performed to assess for statistical differences between groups, for example, comparing music appreciation before and after the onset of SSD, or before and after receiving hearing aid(s).
RESULTS: Results demonstrated that a proportion of the population experienced significant changes to the way music sounded; music was found to sound more unnatural (75%), unpleasant (71%), and indistinct (81%) than before hearing loss. Music was reported to lack the perceptual qualities of stereo sound, and to be confounded by distortion effects and tinnitus. Such changes manifested in an altered music appreciation, with 44% of participants listening to music less often, 71% of participants enjoying music less, and 46% of participants reporting that music played a lesser role in their lives than pre-SSD. Negative effects surrounding social occasions with music were revealed, along with a strong preference for limiting background music. Hearing aids were not found to significantly ameliorate these effects.
CONCLUSIONS: Results could be explained in part through considerations of psychoacoustic changes intrinsic to an asymmetric hearing loss and impaired auditory scene analysis. Given the prevalence of music and its capacity to influence an individual's well-being, results here present strong indications that the potential effects of SSD on music appreciation should be considered in a clinical context; an investigation into relevant rehabilitation techniques may prove valuable.

PMID: 28534734 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2qOSyML
via IFTTT

Acute Acoustic Trauma among Soldiers during an Intense Combat.


J Am Acad Audiol. 2017 May;28(5):436-443

Authors: Yehudai N, Fink N, Shpriz M, Marom T

Abstract
BACKGROUND: During military actions, soldiers are constantly exposed to various forms of potentially harmful noise. Acute acoustic trauma (AAT) results from an unexpected, intense impulse noise (≥140 dB), which generates a high-energy sound wave that can damage the auditory system.
PURPOSE: We sought to characterize AAT injuries among military personnel during operation "Protective Edge," to analyze the effectiveness of hearing protection devices (HPDs), and to evaluate the benefit of steroid treatment in early-diagnosed AAT injury.
RESEARCH DESIGN: We retrospectively identified affected individuals who presented to military medical facilities with solitary or combined AAT injuries within 4 mo following an intense military operation, which was characterized by abrupt, intensive noise exposure (July-December 2014).
STUDY SAMPLE: A total of 186 participants who were referred during and shortly after a military operation with suspected AAT injury.
INTERVENTIONS: HPDs, oral steroids.
DATA COLLECTION AND ANALYSIS: Data extracted from charts and audiograms included demographics, AAT severity, worn HPDs, first and last audiograms and treatment (if given). The Student's independent samples t test was used to compare continuous variables. All tests were considered significant if p values were ≤0.05.
RESULTS: A total of 186 participants presented with hearing complaints attributed to AAT: 122, 39, and 25 were in duty service, career personnel, and reservists, with a mean age of 21.1, 29.2, and 30.4 yr, respectively. Of them, 92 (49%) participants had confirmed hearing loss in at least one ear. Hearing impairment was significantly more common in unprotected participants, when compared with protected participants: 62% (74/119) versus 45% (30/67), p < 0.05. Tinnitus was more common in unprotected participants when compared with protected participants (75% versus 49%, p = 0.04), whereas vertigo was an uncommon symptom (5% versus 2.5%, respectively, p > 0.05). In the 21 participants who received steroid treatment for early-diagnosed AAT, bone-conduction hearing thresholds significantly improved in the posttreatment audiograms, when compared with untreated participants (p < 0.01, for 1-4 kHz).
CONCLUSIONS: AAT is a common military injury, and should be diagnosed early to minimize associated morbidity. HPDs were proven to be effective in preventing and minimizing AAT hearing sequelae. Steroid treatment was effective in AAT injury, if initiated within 7 days after noise exposure.

PMID: 28534733 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rPtEeB
via IFTTT

Evaluation of Adaptive Noise Management Technologies for School-Age Children with Hearing Loss.


J Am Acad Audiol. 2017 May;28(5):415-435

Authors: Wolfe J, Duke M, Schafer E, Jones C, Rakita L

Abstract
BACKGROUND: Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations.
PURPOSE: The objective of this study was to compare speech recognition, speech intelligibility ratings (SIRs), and sound preferences of children using hearing aids equipped with and without adaptive noise management technologies.
RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences obtained in four simulated environments. In each simulated environment, participants were tested in a basic listening program with minimal noise management features, a manual program designed for that scene, and the hearing instruments' adaptive operating system that steered hearing instrument parameterization based on the characteristics of the environment.
STUDY SAMPLE: Twelve children with mild to moderately severe sensorineural hearing loss.
DATA COLLECTION AND ANALYSIS: Speech recognition and SIRs were evaluated in three hearing aid programs with and without noise management technologies across two different test sessions and various listening environments. Also, the participants' perceptual hearing performance in daily real-world listening situations with two of the hearing aid programs was evaluated during a four- to six-week field trial that took place between the two laboratory sessions.
RESULTS: On average, the use of adaptive noise management technology improved sentence recognition in noise for speech presented in front of the participant but resulted in a decrement in performance for signals arriving from behind when the participant was facing forward. However, the improvement with adaptive noise management exceeded the decrement obtained when the signal arrived from behind. Most participants reported better subjective SIRs when using adaptive noise management technologies, particularly when the signal of interest arrived from in front of the listener. In addition, most participants reported a preference for the technology with an automatically switching, adaptive directional microphone and adaptive noise reduction in real-world listening situations when compared to conventional, omnidirectional microphone use with minimal noise reduction processing.
CONCLUSIONS: Use of the adaptive noise management technologies evaluated in this study improves school-age children's speech recognition in noise for signals arriving from the front. Although a small decrement in speech recognition in noise was observed for signals arriving from behind the listener, most participants reported a preference for use of noise management technology both when the signal arrived from in front and from behind the child. The results of this study suggest that adaptive noise management technologies should be considered for use with school-age children when listening in academic and social situations.

PMID: 28534732 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rQaFRd
via IFTTT

Speech Recognition in Nonnative versus Native English-Speaking College Students in a Virtual Classroom.


J Am Acad Audiol. 2017 May;28(5):404-414

Authors: Neave-DiToro D, Rubinstein A, Neuman AC

Abstract
BACKGROUND: Limited attention has been given to the effects of classroom acoustics at the college level. Many studies have reported that nonnative speakers of English are more likely to be affected by poor room acoustics than native speakers. An important question is how classroom acoustics affect speech perception of nonnative college students.
PURPOSE: The combined effect of noise and reverberation on the speech recognition performance of college students who differ in age of English acquisition was evaluated under conditions simulating classrooms with reverberation times (RTs) close to ANSI recommended RTs.
RESEARCH DESIGN: A mixed design was used in this study.
STUDY SAMPLE: Thirty-six native and nonnative English-speaking college students with normal hearing, ages 18-28 yr, participated.
INTERVENTION: Two groups of nine native participants (native monolingual [NM] and native bilingual) and two groups of nine nonnative participants (nonnative early and nonnative late) were evaluated in noise under three reverberant conditions (0.3, 0.6, and 0.8 sec).
DATA COLLECTION AND ANALYSIS: A virtual test paradigm was used, which represented a signal reaching a student at the back of a classroom. Speech recognition in noise was measured using the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test and signal-to-noise ratio required for correct repetition of 50% of the key words in the stimulus sentences (SNR-50) was obtained for each group in each reverberant condition. A mixed-design analysis of variance was used to determine statistical significance as a function of listener group and RT.
RESULTS: SNR-50 was significantly higher for nonnative listeners as compared to native listeners, and a more favorable SNR-50 was needed as RT increased. The most dramatic effect on SNR-50 was found in the group with later acquisition of English, whereas the impact of early introduction of a second language was subtler. At the ANSI standard's maximum recommended RT (0.6 sec), all groups except the NM group exhibited a mild signal-to-noise ratio (SNR) loss. At the 0.8 sec RT, all groups exhibited a mild SNR loss.
CONCLUSION: Acoustics in the classroom are an important consideration for nonnative speakers who are proficient in English and enrolled in college. To address the need for a clearer speech signal by nonnative students (and for all students), universities should follow ANSI recommendations, as well as minimize background noise in occupied classrooms. Behavioral/instructional strategies should be considered to address factors that cannot be compensated for through acoustic design.
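The SNR-50 used above is the signal-to-noise ratio at which 50% of key words are repeated correctly. A minimal sketch of how such a point can be estimated by linear interpolation on a measured psychometric function (a generic illustration with hypothetical data points, not the actual BKB-SIN scoring procedure):

```python
def snr50(snrs, percent_correct):
    """Estimate SNR-50 (the SNR yielding 50% of key words correct)
    by linear interpolation between measured points.
    Generic psychometric-function sketch; not the BKB-SIN scoring rules."""
    pts = sorted(zip(snrs, percent_correct))
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if min(p0, p1) <= 50 <= max(p0, p1) and p0 != p1:
            return s0 + (50 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point is not bracketed by the data")

# Hypothetical listener: % key words correct measured at four SNRs
print(snr50([-5, 0, 5, 10], [10, 30, 70, 95]))  # 2.5 (dB)
```

A group needing a higher SNR-50 (as the nonnative listeners did at longer reverberation times) requires a more favorable signal-to-noise ratio for the same intelligibility.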

PMID: 28534731 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rgsf3q
via IFTTT

Big Stimulus, Little Ears: Safety in Administering Vestibular-Evoked Myogenic Potentials in Children.


J Am Acad Audiol. 2017 May;28(5):395-403

Authors: Thomas MLA, Fitzpatrick D, McCreery R, Janky KL

Abstract
BACKGROUND: Cervical and ocular vestibular-evoked myogenic potentials (VEMPs) have become common clinical vestibular assessments. However, VEMP testing requires high intensity stimuli, raising concerns regarding safety with children, where sound pressure levels may be higher due to their smaller ear canal volumes.
PURPOSE: The purpose of this study was to estimate the range of peak-to-peak equivalent sound pressure levels (peSPLs) in child and adult ears in response to high intensity stimuli (i.e., 100 dB normal hearing level [nHL]) commonly used for VEMP testing and make a determination of whether acoustic stimuli levels with VEMP testing are safe for use in children.
RESEARCH DESIGN: Prospective experimental.
STUDY SAMPLE: Ten children (4-6 years) and ten young adults (24-35 years) with normal hearing sensitivity and middle ear function participated in the study.
DATA COLLECTION AND ANALYSIS: Probe microphone peSPL measurements of clicks and 500 Hz tonebursts (TBs) were recorded in tubes of small, medium, and large diameter, and in a Brüel & Kjær Ear Simulator Type 4157 to assess for linearity of the stimulus at high levels. The different diameter tubes were used to approximate the range of cross-sectional areas in infant, child, and adult ears, respectively. Equivalent ear canal volume and peSPL measurements were then recorded in child and adult ears. Lower intensity levels were used in the participant's ears to limit exposure to high intensity sound. The peSPL measurements in participant ears were extrapolated using predictions from linear mixed models to determine if equivalent ear canal volume significantly contributed to overall peSPL and to estimate the mean and 95% confidence intervals of peSPLs in child and adult ears when high intensity stimulus levels (100 dB nHL) are used for VEMP testing without exposing subjects to high-intensity stimuli.
RESULTS: Measurements from the coupler and tubes suggested: 1) each stimulus was linear, 2) there were no distortions or nonlinearities at high levels, and 3) peSPL increased with decreased tube diameter. Measurements in participant ears suggested: 1) peSPL was approximately 3 dB larger in child compared to adult ears, and 2) peSPL was larger in response to clicks compared to 500 Hz TBs. The model predicted the following 95% confidence interval for a 100 dB nHL click: 127-136.5 dB peSPL in adult ears and 128.7-138.2 dB peSPL in child ears. The model predicted the following 95% confidence interval for a 100 dB nHL 500 Hz TB stimulus: 122.2-128.2 dB peSPL in adult ears and 124.8-130.8 dB peSPL in child ears.
CONCLUSIONS: Our findings suggest that 1) when completing VEMP testing, the stimulus is approximately 3 dB higher in a child's ear, 2) a 500 Hz TB is recommended over a click as it has lower peSPL compared to the click, and 3) both duration and intensity should be considered when choosing VEMP stimuli. Calculating the total sound energy exposure for your chosen stimuli is recommended as it accounts for both duration and intensity. When using this calculation for children, consider adding 3 dB to the stimulus level.
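The conclusions recommend calculating total sound energy exposure, which accounts for both intensity and duration. A simplified sketch of that idea, assuming the usual equal-energy rule (each doubling of stimulus duration or repetition count adds about 3 dB); the stimulus parameters below are illustrative, not the study's:

```python
import math

def total_energy_exposure_db(pe_spl_db: float, stimulus_duration_s: float,
                             n_stimuli: int) -> float:
    """Total sound energy exposure in dB re a 1-s exposure at the given
    level: doubling the duration or the number of stimuli adds ~3 dB.
    A simplified sketch of the calculation the authors recommend, not
    their exact method. For a child's ear, the abstract suggests adding
    ~3 dB to the stimulus level before this calculation."""
    return pe_spl_db + 10.0 * math.log10(stimulus_duration_s * n_stimuli)

# Compare a brief click train with a longer 500 Hz toneburst train
# (illustrative values: peSPL, per-stimulus duration, sweep count)
click_exposure = total_energy_exposure_db(136.0, 0.0001, 200)
toneburst_exposure = total_energy_exposure_db(128.0, 0.008, 200)
```

This makes the trade-off concrete: a toneburst has lower peSPL than a click but lasts much longer, so duration must enter the safety calculation alongside level.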

PMID: 28534730 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rghMVM
via IFTTT

Self-Selection of Frequency Tables with Bilateral Mismatches in an Acoustic Simulation of a Cochlear Implant.


J Am Acad Audiol. 2017 May;28(5):385-394

Authors: Fitzgerald MB, Prosolovich K, Tan CT, Glassman EK, Svirsky MA

Abstract
BACKGROUND: Many recipients of bilateral cochlear implants (CIs) may have differences in electrode insertion depth. Previous reports indicate that when a bilateral mismatch is imposed, performance on tests of speech understanding or sound localization becomes worse. If recipients of bilateral CIs cannot adjust to a difference in insertion depth, adjustments to the frequency table may be necessary to maximize bilateral performance.
PURPOSE: The purpose of this study was to examine the feasibility of using real-time manipulations of the frequency table to offset any decrements in performance resulting from a bilateral mismatch.
RESEARCH DESIGN: A simulation of a CI was used because it allows for explicit control of the size of a bilateral mismatch. Such control is not available with users of CIs.
STUDY SAMPLE: A total of 31 normal-hearing young adults participated in this study.
DATA COLLECTION AND ANALYSIS: Using a CI simulation, four bilateral mismatch conditions (0, 0.75, 1.5, and 3 mm) were created. In the left ear, the analysis filters and noise bands of the CI simulation were the same. In the right ear, the noise bands were shifted higher in frequency to simulate a bilateral mismatch. Then, listeners selected a frequency table in the right ear that was perceived as maximizing bilateral speech intelligibility. Word-recognition scores were then assessed for each bilateral mismatch condition. Listeners were tested both with a standard frequency table, which preserved the bilateral mismatch, and with their self-selected frequency table.
RESULTS: Consistent with previous reports, bilateral mismatches of 1.5 and 3 mm yielded decrements in word recognition when the standard table was used in both ears. However, when listeners used the self-selected frequency table, performance was the same regardless of the size of the bilateral mismatch.
CONCLUSIONS: Self-selection of a frequency table appears to be a feasible method for ameliorating the negative effects of a bilateral mismatch. These data may have implications for recipients of bilateral CIs who cannot adapt to a bilateral mismatch, because they suggest that (1) such individuals may benefit from modification of the frequency table in one ear and (2) self-selection of a "most intelligible" frequency table may be a useful tool for determining how the frequency table should be altered to optimize speech recognition.
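The millimeter mismatches above can be related to frequency shifts through a cochlear place-frequency map. A hypothetical sketch using the Greenwood (1990) human map, chosen here only for illustration; the study's vocoder may use a different mapping:

```python
# Greenwood (1990) place-frequency parameters for the human cochlea.
# Using this map is an assumption made for illustration; it is not
# necessarily the mapping used in the study's CI simulation.
A, K, ALPHA = 165.4, 0.88, 0.06

def greenwood_hz(distance_from_apex_mm: float) -> float:
    """Characteristic frequency (Hz) at a cochlear place, in mm from the apex."""
    return A * (10 ** (ALPHA * distance_from_apex_mm) - K)

def simulated_mismatch(place_mm: float, shift_mm: float) -> tuple:
    """Frequency at a place and at the same place shifted basally by
    shift_mm, as when one ear's electrode array sits shallower."""
    return greenwood_hz(place_mm), greenwood_hz(place_mm + shift_mm)

matched, shifted = simulated_mismatch(20.0, 3.0)  # a 3 mm mismatch
```

With these parameters, a 3 mm basal shift at a place 20 mm from the apex moves the characteristic frequency from roughly 2.5 kHz to about 3.8 kHz, which illustrates why the larger mismatches degraded word recognition with the standard table.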

PMID: 28534729 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rgopav
via IFTTT

Hearing Loss and Age-Induced Changes in the Central Auditory System Measured by the P3 Response to Small Changes in Frequency.


J Am Acad Audiol. 2017 May;28(5):373-384

Authors: Vander Werff KR, Nesbitt KL

Abstract
BACKGROUND: Recent behavioral studies have suggested that individuals with sloping audiograms exhibit localized improvements in frequency discrimination in the frequency region near the drop in hearing. Auditory-evoked potentials may provide evidence of such cortical plasticity and reorganization of frequency maps.
PURPOSE: The objective of this study was to evaluate electrophysiological evidence of cortical plasticity related to cortical frequency representation and discrimination abilities in older individuals with high-frequency sensorineural hearing loss (SNHL). It was hypothesized that the P3 response in this group would show evidence of physiological reorganization of frequency maps and enhanced neural representation at the edge of their high-frequency loss due to their restricted SNHL.
RESEARCH DESIGN: The P3 auditory event-related potential in response to small frequency changes was recorded in a repeated measures design using an oddball paradigm that presented upward and downward frequency changes of 2%, 5%, and 20% to three groups of listeners.
STUDY SAMPLE: P3 recordings from a group of seven older individuals with a restricted sloping hearing loss above 1000 or 2000 Hz were compared to those from two control groups of younger (n = 7) and older (n = 7) individuals with normal hearing/borderline normal hearing through 4000 Hz.
DATA COLLECTION AND ANALYSIS: The auditory P3 was recorded using an oddball paradigm (80%/20%) with the standard tone at the highest frequency of normal hearing in the hearing-impaired participants, also known as the edge frequency (EF). EFs were either 1000 or 2000 Hz for all participants. The target tones represented upward and downward frequency changes of 2%, 5%, and 20% from the standard tones of either 1000 or 2000 Hz. Waveforms were recorded using a two-channel clinical-evoked potential system. Latency and amplitude of the P300 peak were analyzed across groups for the three frequency conditions using repeated measures analysis of variance.
RESULTS: The results of this study suggest that the P3 response can be elicited by frequency changes as small as 2-5%. P3 responses at the EF of hearing loss were present and larger in amplitude for more participants with a sloping hearing loss compared to age-matched normal-hearing peers tested at the same frequencies. As a result, the older participants with sloping hearing losses had P3 responses more similar to the younger normal-hearing participants than their age-matched peers with normal hearing.
CONCLUSIONS: These preliminary results partially support the idea of enhanced cortical representation of frequency at the EF of localized SNHL in older adults that is not purely due to age.

PMID: 28534728 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rgtNKT
via IFTTT

Under Pressure: Vestibular-Evoked Myogenic Potentials and the Auditory Stimuli That Evoke Them.

Related Articles

Under Pressure: Vestibular-Evoked Myogenic Potentials and the Auditory Stimuli That Evoke Them.

J Am Acad Audiol. 2017 May;28(5):372

Authors: McCaslin DL

PMID: 28534727 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rPnmLL
via IFTTT

The Use of the Gaps-In-Noise Test as an Index of the Enhanced Left Temporal Cortical Thinning Associated with the Transition between Mild Cognitive Impairment and Alzheimer's Disease.

J Am Acad Audiol. 2017 May;28(5):463-471

Authors: Iliadou VV, Bamiou DE, Sidiras C, Moschopoulos NP, Tsolaki M, Nimatoudis I, Chermak GD

Abstract
BACKGROUND: The known link between auditory perception and cognition is often overlooked when testing for cognition.
PURPOSE: To evaluate auditory perception in a group of older adults diagnosed with mild cognitive impairment (MCI).
RESEARCH DESIGN: A cross-sectional study of auditory perception.
STUDY SAMPLE: Adults with MCI and adults with no documented cognitive issues and matched hearing sensitivity and age.
DATA COLLECTION: Auditory perception was evaluated in both groups, assessing for hearing sensitivity, speech in babble (SinB), and temporal resolution.
RESULTS: The Mann-Whitney test revealed significantly poorer SinB and temporal resolution scores in the MCI group than in normal controls for both ears. The right-ear gap detection thresholds on the Gaps-In-Noise (GIN) Test clearly differentiated between the two groups (p < 0.001), with no overlap of values. The left-ear results also differentiated the two groups (p < 0.01); however, there was a small degree of overlap around the ∼8-msec threshold values. With the exception of the left-ear inattentiveness index, which showed a similar distribution between groups, both impulsivity and inattentiveness indexes were higher for the MCI group than for the control group.
CONCLUSIONS: The results support central auditory processing evaluation in the elderly population as a promising tool to achieve earlier diagnosis of dementia, while identifying central auditory processing deficits that can contribute to communication deficits in the MCI patient population. A measure of temporal resolution (GIN) may offer an early, albeit indirect, measure reflecting left temporal cortical thinning associated with the transition between MCI and Alzheimer's disease.

PMID: 28534735 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rW6zpF
via IFTTT

The Impact of Single-Sided Deafness upon Music Appreciation.

J Am Acad Audiol. 2017 May;28(5):444-462

Authors: Meehan S, Hough EA, Crundwell G, Knappett R, Smith M, Baguley DM

Abstract
BACKGROUND: A substantial share of the world's population has hearing loss in one ear; current statistics indicate that up to 10% of people may be affected. Although the detrimental impact of bilateral hearing loss, hearing aids, and cochlear implants upon music appreciation is well recognized, studies on the influence of single-sided deafness (SSD) are sparse.
PURPOSE: We sought to investigate whether a single-sided hearing loss can cause problems with music appreciation, despite normal hearing in the other ear.
RESEARCH DESIGN: A tailored questionnaire was used to investigate music appreciation for those with SSD.
STUDY SAMPLE: We performed a retrospective survey of a population of 51 adults from a University Hospital Audiology Department SSD clinic. SSD was predominantly adult-onset sensorineural hearing loss, caused by a variety of etiologies.
DATA ANALYSIS: Analyses were performed to assess for statistical differences between groups, for example, comparing music appreciation before and after the onset of SSD, or before and after receiving hearing aid(s).
RESULTS: Results demonstrated that a proportion of the population experienced significant changes to the way music sounded; music was found to sound more unnatural (75%), unpleasant (71%), and indistinct (81%) than before hearing loss. Music was reported to lack the perceptual qualities of stereo sound, and to be confounded by distortion effects and tinnitus. Such changes manifested in an altered music appreciation, with 44% of participants listening to music less often, 71% of participants enjoying music less, and 46% of participants reporting that music played a lesser role in their lives than pre-SSD. Negative effects surrounding social occasions with music were revealed, along with a strong preference for limiting background music. Hearing aids were not found to significantly ameliorate these effects.
CONCLUSIONS: Results could be explained in part through considerations of psychoacoustic changes intrinsic to an asymmetric hearing loss and impaired auditory scene analysis. Given the prevalence of music and its capacity to influence an individual's well-being, results here present strong indications that the potential effects of SSD on music appreciation should be considered in a clinical context; an investigation into relevant rehabilitation techniques may prove valuable.

PMID: 28534734 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2qOSyML
via IFTTT

Acute Acoustic Trauma among Soldiers during an Intense Combat.

J Am Acad Audiol. 2017 May;28(5):436-443

Authors: Yehudai N, Fink N, Shpriz M, Marom T

Abstract
BACKGROUND: During military actions, soldiers are constantly exposed to various forms of potentially harmful noise. Acute acoustic trauma (AAT) results from an unexpected, intense impulse noise ≥140 dB, which generates a high-energy sound wave that can damage the auditory system.
PURPOSE: We sought to characterize AAT injuries among military personnel during operation "Protective Edge," to analyze the effectiveness of hearing protection devices (HPDs), and to evaluate the benefit of steroid treatment in early-diagnosed AAT injury.
RESEARCH DESIGN: We retrospectively identified affected individuals who presented to military medical facilities with solitary or combined AAT injuries within 4 mo following an intense military operation characterized by abrupt, intense noise exposure (July-December 2014).
STUDY SAMPLE: A total of 186 participants who were referred during and shortly after a military operation with suspected AAT injury.
INTERVENTIONS: HPDs, oral steroids.
DATA COLLECTION AND ANALYSIS: Data extracted from charts and audiograms included demographics, AAT severity, worn HPDs, first and last audiograms and treatment (if given). The Student's independent samples t test was used to compare continuous variables. All tests were considered significant if p values were ≤0.05.
RESULTS: A total of 186 participants presented with hearing complaints attributed to AAT: 122, 39, and 25 were in duty service, career personnel, and reservists, with a mean age of 21.1, 29.2, and 30.4 yr, respectively. Of them, 92 (49%) participants had confirmed hearing loss in at least one ear. Hearing impairment was significantly more common in unprotected participants, when compared with protected participants: 62% (74/119) versus 45% (30/67), p < 0.05. Tinnitus was more common in unprotected participants when compared with protected participants (75% versus 49%, p = 0.04), whereas vertigo was an uncommon symptom (5% versus 2.5%, respectively, p > 0.05). In the 21 participants who received steroid treatment for early-diagnosed AAT, bone-conduction hearing thresholds significantly improved in the posttreatment audiograms, when compared with untreated participants (p < 0.01, for 1-4 kHz).
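The protected-versus-unprotected comparison reported above (74/119 vs. 30/67 with hearing impairment) can be checked with a standard two-proportion z-test. The abstract does not name the test used for this particular comparison, so the following Python snippet is only an illustrative sketch:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p from the normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_two = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two

# Hearing impairment: 74/119 unprotected vs. 30/67 protected (the abstract's counts)
z, p = two_proportion_z(74, 119, 30, 67)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05, consistent with the reported result
```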
CONCLUSIONS: AAT is a common military injury, and should be diagnosed early to minimize associated morbidity. HPDs were proven to be effective in preventing and minimizing AAT hearing sequelae. Steroid treatment was effective in AAT injury, if initiated within 7 days after noise exposure.

PMID: 28534733 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rPtEeB
via IFTTT

Evaluation of Adaptive Noise Management Technologies for School-Age Children with Hearing Loss.

J Am Acad Audiol. 2017 May;28(5):415-435

Authors: Wolfe J, Duke M, Schafer E, Jones C, Rakita L

Abstract
BACKGROUND: Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations.
PURPOSE: The objective of this study was to compare speech recognition, speech intelligibility ratings (SIRs), and sound preferences of children using hearing aids equipped with and without adaptive noise management technologies.
RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences obtained in four simulated environments. In each simulated environment, participants were tested in a basic listening program with minimal noise management features, a manual program designed for that scene, and the hearing instruments' adaptive operating system that steered hearing instrument parameterization based on the characteristics of the environment.
STUDY SAMPLE: Twelve children with mild to moderately severe sensorineural hearing loss.
DATA COLLECTION AND ANALYSIS: Speech recognition and SIRs were evaluated in three hearing aid programs with and without noise management technologies across two different test sessions and various listening environments. Also, the participants' perceptual hearing performance in daily real-world listening situations with two of the hearing aid programs was evaluated during a four- to six-week field trial that took place between the two laboratory sessions.
RESULTS: On average, the use of adaptive noise management technology improved sentence recognition in noise for speech presented in front of the participant but resulted in a decrement in performance for signals arriving from behind when the participant was facing forward. However, the improvement with adaptive noise management exceeded the decrement obtained when the signal arrived from behind. Most participants reported better subjective SIRs when using adaptive noise management technologies, particularly when the signal of interest arrived from in front of the listener. In addition, most participants reported a preference for the technology with an automatically switching, adaptive directional microphone and adaptive noise reduction in real-world listening situations when compared to conventional, omnidirectional microphone use with minimal noise reduction processing.
CONCLUSIONS: Use of the adaptive noise management technologies evaluated in this study improves school-age children's speech recognition in noise for signals arriving from the front. Although a small decrement in speech recognition in noise was observed for signals arriving from behind the listener, most participants reported a preference for use of noise management technology both when the signal arrived from in front and from behind the child. The results of this study suggest that adaptive noise management technologies should be considered for use with school-age children when listening in academic and social situations.

PMID: 28534732 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rQaFRd
via IFTTT

Speech Recognition in Nonnative versus Native English-Speaking College Students in a Virtual Classroom.

J Am Acad Audiol. 2017 May;28(5):404-414

Authors: Neave-DiToro D, Rubinstein A, Neuman AC

Abstract
BACKGROUND: Limited attention has been given to the effects of classroom acoustics at the college level. Many studies have reported that nonnative speakers of English are more likely to be affected by poor room acoustics than native speakers. An important question is how classroom acoustics affect speech perception of nonnative college students.
PURPOSE: The combined effect of noise and reverberation on the speech recognition performance of college students who differ in age of English acquisition was evaluated under conditions simulating classrooms with reverberation times (RTs) close to ANSI recommended RTs.
RESEARCH DESIGN: A mixed design was used in this study.
STUDY SAMPLE: Thirty-six native and nonnative English-speaking college students with normal hearing, ages 18-28 yr, participated.
INTERVENTION: Two groups of nine native participants (native monolingual [NM] and native bilingual) and two groups of nine nonnative participants (nonnative early and nonnative late) were evaluated in noise under three reverberant conditions (0.3, 0.6, and 0.8 sec).
DATA COLLECTION AND ANALYSIS: A virtual test paradigm was used, which represented a signal reaching a student at the back of a classroom. Speech recognition in noise was measured using the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test and signal-to-noise ratio required for correct repetition of 50% of the key words in the stimulus sentences (SNR-50) was obtained for each group in each reverberant condition. A mixed-design analysis of variance was used to determine statistical significance as a function of listener group and RT.
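The SNR-50 described above is the point on the psychometric function at which half the key words are repeated correctly. The BKB-SIN test has its own scoring procedure, so the snippet below is only a generic linear-interpolation sketch with hypothetical scores:

```python
def snr50(snrs, pct_correct):
    """Linearly interpolate the SNR at which 50% of key words are repeated.
    Assumes snrs are sorted ascending with monotonically rising scores."""
    points = list(zip(snrs, pct_correct))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= 50 <= p1:
            return s0 + (50 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the data")

# Hypothetical scores (percent of key words correct at each SNR in dB):
print(round(snr50([-6, -3, 0, 3, 6], [10, 30, 55, 80, 95]), 2))
```

A lower (more negative) SNR-50 indicates better speech recognition in noise.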
RESULTS: SNR-50 was significantly higher for nonnative listeners as compared to native listeners, and a more favorable SNR-50 was needed as RT increased. The most dramatic effect on SNR-50 was found in the group with later acquisition of English, whereas the impact of early introduction of a second language was subtler. At the ANSI standard's maximum recommended RT (0.6 sec), all groups except the NM group exhibited a mild signal-to-noise ratio (SNR) loss. At the 0.8 sec RT, all groups exhibited a mild SNR loss.
CONCLUSION: Acoustics in the classroom are an important consideration for nonnative speakers who are proficient in English and enrolled in college. To address the need for a clearer speech signal by nonnative students (and for all students), universities should follow ANSI recommendations, as well as minimize background noise in occupied classrooms. Behavioral/instructional strategies should be considered to address factors that cannot be compensated for through acoustic design.

PMID: 28534731 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rgsf3q
via IFTTT

Big Stimulus, Little Ears: Safety in Administering Vestibular-Evoked Myogenic Potentials in Children.

J Am Acad Audiol. 2017 May;28(5):395-403

Authors: Thomas MLA, Fitzpatrick D, McCreery R, Janky KL

Abstract
BACKGROUND: Cervical and ocular vestibular-evoked myogenic potentials (VEMPs) have become common clinical vestibular assessments. However, VEMP testing requires high intensity stimuli, raising concerns regarding safety with children, where sound pressure levels may be higher due to their smaller ear canal volumes.
PURPOSE: The purpose of this study was to estimate the range of peak-to-peak equivalent sound pressure levels (peSPLs) in child and adult ears in response to high intensity stimuli (i.e., 100 dB normal hearing level [nHL]) commonly used for VEMP testing, and to determine whether the acoustic stimulus levels used in VEMP testing are safe for children.
RESEARCH DESIGN: Prospective experimental.
STUDY SAMPLE: Ten children (4-6 years) and ten young adults (24-35 years) with normal hearing sensitivity and middle ear function participated in the study.
DATA COLLECTION AND ANALYSIS: Probe microphone peSPL measurements of clicks and 500 Hz tonebursts (TBs) were recorded in tubes of small, medium, and large diameter, and in a Brüel & Kjær Ear Simulator Type 4157 to assess for linearity of the stimulus at high levels. The different diameter tubes were used to approximate the range of cross-sectional areas in infant, child, and adult ears, respectively. Equivalent ear canal volume and peSPL measurements were then recorded in child and adult ears. Lower intensity levels were used in the participant's ears to limit exposure to high intensity sound. The peSPL measurements in participant ears were extrapolated using predictions from linear mixed models to determine if equivalent ear canal volume significantly contributed to overall peSPL and to estimate the mean and 95% confidence intervals of peSPLs in child and adult ears when high intensity stimulus levels (100 dB nHL) are used for VEMP testing without exposing subjects to high-intensity stimuli.
RESULTS: Measurements from the coupler and tubes suggested: 1) each stimulus was linear, 2) there were no distortions or nonlinearities at high levels, and 3) peSPL increased with decreased tube diameter. Measurements in participant ears suggested: 1) peSPL was approximately 3 dB larger in child compared to adult ears, and 2) peSPL was larger in response to clicks compared to 500 Hz TBs. The model predicted the following 95% confidence interval for a 100 dB nHL click: 127-136.5 dB peSPL in adult ears and 128.7-138.2 dB peSPL in child ears. The model predicted the following 95% confidence interval for a 100 dB nHL 500 Hz TB stimulus: 122.2-128.2 dB peSPL in adult ears and 124.8-130.8 dB peSPL in child ears.
CONCLUSIONS: Our findings suggest that 1) when completing VEMP testing, the stimulus is approximately 3 dB higher in a child's ear, 2) a 500 Hz TB is recommended over a click as it has lower peSPL compared to the click, and 3) both duration and intensity should be considered when choosing VEMP stimuli. Calculating the total sound energy exposure for your chosen stimuli is recommended as it accounts for both duration and intensity. When using this calculation for children, consider adding 3 dB to the stimulus level.
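The conclusion's advice to add 3 dB for a child's ear and to weigh both duration and intensity follows from basic decibel arithmetic. A simplified sketch, using illustrative stimulus counts and durations that are not values from the study:

```python
from math import log10

def intensity_ratio(db_difference):
    """Sound power ratio corresponding to a level difference in dB."""
    return 10 ** (db_difference / 10)

def total_exposure_db(level_db, n_stimuli, duration_per_stimulus_s):
    """Equivalent total sound energy: level plus 10*log10 of total 'on' time.
    A simplified sketch; real damage-risk criteria are more involved."""
    return level_db + 10 * log10(n_stimuli * duration_per_stimulus_s)

# The ~3 dB child/adult difference corresponds to roughly double the power:
print(round(intensity_ratio(3), 2))  # 2.0

# Hypothetical VEMP runs: 200 clicks of 0.1 ms vs. 200 tonebursts of 8 ms
click_exposure = total_exposure_db(133, 200, 0.0001)
tb_exposure = total_exposure_db(127, 200, 0.008)
# The longer toneburst carries more total energy despite its lower peak level.
```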

PMID: 28534730 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rghMVM
via IFTTT

Self-Selection of Frequency Tables with Bilateral Mismatches in an Acoustic Simulation of a Cochlear Implant.

J Am Acad Audiol. 2017 May;28(5):385-394

Authors: Fitzgerald MB, Prosolovich K, Tan CT, Glassman EK, Svirsky MA

Abstract
BACKGROUND: Many recipients of bilateral cochlear implants (CIs) may have differences in electrode insertion depth. Previous reports indicate that when a bilateral mismatch is imposed, performance on tests of speech understanding or sound localization becomes worse. If recipients of bilateral CIs cannot adjust to a difference in insertion depth, adjustments to the frequency table may be necessary to maximize bilateral performance.
PURPOSE: The purpose of this study was to examine the feasibility of using real-time manipulations of the frequency table to offset any decrements in performance resulting from a bilateral mismatch.
RESEARCH DESIGN: A simulation of a CI was used because it allows for explicit control of the size of a bilateral mismatch. Such control is not available with users of CIs.
STUDY SAMPLE: A total of 31 normal-hearing young adults participated in this study.
DATA COLLECTION AND ANALYSIS: Using a CI simulation, four bilateral mismatch conditions (0, 0.75, 1.5, and 3 mm) were created. In the left ear, the analysis filters and noise bands of the CI simulation were the same. In the right ear, the noise bands were shifted higher in frequency to simulate a bilateral mismatch. Then, listeners selected a frequency table in the right ear that was perceived as maximizing bilateral speech intelligibility. Word-recognition scores were then assessed for each bilateral mismatch condition. Listeners were tested both with a standard frequency table, which preserved the bilateral mismatch, and with their self-selected frequency table.
RESULTS: Consistent with previous reports, bilateral mismatches of 1.5 and 3 mm yielded decrements in word recognition when the standard table was used in both ears. However, when listeners used the self-selected frequency table, performance was the same regardless of the size of the bilateral mismatch.
CONCLUSIONS: Self-selection of a frequency table appears to be a feasible method for ameliorating the negative effects of a bilateral mismatch. These data may have implications for recipients of bilateral CIs who cannot adapt to a bilateral mismatch, because they suggest that (1) such individuals may benefit from modification of the frequency table in one ear and (2) self-selection of a "most intelligible" frequency table may be a useful tool for determining how the frequency table should be altered to optimize speech recognition.

PMID: 28534729 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rgopav
via IFTTT

Hearing Loss and Age-Induced Changes in the Central Auditory System Measured by the P3 Response to Small Changes in Frequency.

J Am Acad Audiol. 2017 May;28(5):373-384

Authors: Vander Werff KR, Nesbitt KL

Abstract
BACKGROUND: Recent behavioral studies have suggested that individuals with sloping audiograms exhibit localized improvements in frequency discrimination in the frequency region near the drop in hearing. Auditory-evoked potentials may provide evidence of such cortical plasticity and reorganization of frequency maps.
PURPOSE: The objective of this study was to evaluate electrophysiological evidence of cortical plasticity related to cortical frequency representation and discrimination abilities in older individuals with high-frequency sensorineural hearing loss (SNHL). It was hypothesized that the P3 response in this group would show evidence of physiological reorganization of frequency maps and enhanced neural representation at the edge of their high-frequency loss due to their restricted SNHL.
RESEARCH DESIGN: The P3 auditory event-related potential in response to small frequency changes was recorded in a repeated measures design using an oddball paradigm that presented upward and downward frequency changes of 2%, 5%, and 20% to three groups of listeners.
STUDY SAMPLE: P3 recordings from a group of seven older individuals with a restricted sloping hearing loss >1000 or 2000 Hz were compared to two control groups of younger (n = 7) and older (n = 7) individuals with normal hearing/borderline normal hearing through 4000 Hz.
DATA COLLECTION AND ANALYSIS: The auditory P3 was recorded using an oddball paradigm (80%/20%) with the standard tone at the highest frequency of normal hearing in the hearing-impaired participants, also known as the edge frequency (EF). EFs were either 1000 or 2000 Hz for all participants. The target tones represented upward and downward frequency changes of 2%, 5%, and 20% from the standard tones of either 1000 or 2000 Hz. Waveforms were recorded using a two-channel clinical-evoked potential system. Latency and amplitude of the P300 peak were analyzed across groups for the three frequency conditions using repeated measures analysis of variance.
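The target-tone frequencies in this oddball paradigm follow directly from the stated percentages; a small sketch of the arithmetic:

```python
def target_frequencies(standard_hz, deltas=(0.02, 0.05, 0.20)):
    """Upward and downward target tones for each relative frequency change."""
    return {d: (standard_hz * (1 + d), standard_hz * (1 - d)) for d in deltas}

for ef in (1000, 2000):  # the two edge frequencies used as standards
    for delta, (up, down) in target_frequencies(ef).items():
        print(f"EF {ef} Hz, {delta:.0%} change: up {up:.0f} Hz, down {down:.0f} Hz")
```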
RESULTS: The results of this study suggest that the P3 response can be elicited by frequency changes as small as 2-5%. P3 responses at the EF of hearing loss were present and larger in amplitude for more participants with a sloping hearing loss compared to age-matched normal-hearing peers tested at the same frequencies. As a result, the older participants with sloping hearing losses had P3 responses more similar to the younger normal-hearing participants than their age-matched peers with normal hearing.
CONCLUSIONS: These preliminary results partially support the idea of enhanced cortical representation of frequency at the EF of localized SNHL in older adults that is not purely due to age.

PMID: 28534728 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rgtNKT
via IFTTT

Under Pressure: Vestibular-Evoked Myogenic Potentials and the Auditory Stimuli That Evoke Them.

J Am Acad Audiol. 2017 May;28(5):372

Authors: McCaslin DL

PMID: 28534727 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2rPnmLL
via IFTTT

Prevalence of Auditory Problems in Children With Feeding and Swallowing Disorders

Purpose
Although an interdisciplinary approach is recommended for assessment and management of feeding or swallowing difficulties, audiologists are not always included in the interdisciplinary team. The purpose of this study is to report the prevalence of middle ear and hearing problems in children with feeding and swallowing disorders and to compare this prevalence with that in typical children.
Method
A total of 103 children were included in the study: 44 children with feeding and swallowing disorders and 59 children without any such disorders. Audiological examinations included case-history information, visualization of the ear canals through otoscopy, middle ear evaluation through tympanometry, and hearing screenings using an audiometer.
Results
The odds of excessive cerumen (p < .001, small effect size), middle ear dysfunction (p = .0148, small effect size), and hearing screening failure (p < .001, large effect size) were 22.14, 2.97, and 13.5 times higher, respectively, in children with feeding and swallowing disorders than in typically developing children.
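The "odds … higher" figures above come from 2 × 2 contingency tables. A minimal Python sketch with purely hypothetical counts, since the abstract reports only the resulting ratios:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
         a = cases with the finding,    b = cases without it
         c = controls with the finding, d = controls without it"""
    return (a / b) / (c / d)

# Purely hypothetical counts for illustration:
print(odds_ratio(10, 10, 5, 20))  # 4.0
```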
Conclusion
The significantly higher prevalence of hearing problems in children with feeding and swallowing disorders compared with typically developing children suggests that inclusion of an audiologist on the interdisciplinary team is likely to improve overall interventional outcomes for children with feeding and swallowing disorders.

from #Audiology via ola Kala on Inoreader http://ift.tt/2ps2Nnn
via IFTTT

Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

Purpose
Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes.
Method
Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing.
Results
CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information.
Conclusion
CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.

from #Audiology via ola Kala on Inoreader http://ift.tt/2oZtEHu
via IFTTT

Spoken Language Production in Young Adults: Examining Syntactic Complexity

Purpose
In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment.
Method
Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density.
Results
Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task.
Conclusion
Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qTxLss
via IFTTT

Efficacy of Visual–Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study

Purpose
This study documented the efficacy of visual–acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of sessions to treatment conditions.
Method
Seven child and adolescent participants received 20 half-hour sessions of individual treatment over 10 weeks. Within each week, sessions were randomly assigned to feature traditional or biofeedback intervention. Perceptual accuracy of rhotic production was assessed in a blinded, randomized fashion. Each participant's response to the combined treatment package was evaluated by using effect sizes and visual inspection. Differences in the magnitude of response to traditional versus biofeedback intervention were measured with individual randomization tests.
Results
Four of 7 participants demonstrated a clinically meaningful response to the combined treatment package. Three of 7 participants showed a statistically significant difference between treatment conditions. In all 3 cases, the magnitude of within-session gains associated with biofeedback exceeded the gains associated with traditional treatment.
Conclusions
These results suggest that the inclusion of visual–acoustic biofeedback can enhance the efficacy of intervention for some individuals with residual rhotic errors. Further research is needed to understand which participants represent better or poorer candidates for biofeedback treatment.

from #Audiology via ola Kala on Inoreader http://ift.tt/2ojCXDV
via IFTTT

Normative Study of Wideband Acoustic Immittance Measures in Newborn Infants

Objective
The purpose of this study was to describe normative aspects of wideband acoustic immittance (WAI) measures obtained from healthy White neonates.
Method
In this cross-sectional study, wideband absorbance (WBA), admittance magnitude, and admittance phase were measured under ambient pressure conditions in 326 ears from 203 neonates (M age = 45.9 hr) who passed a battery of tests, including automated auditory brainstem response, high-frequency tympanometry, and distortion product otoacoustic emissions.
Results
Normative WBA data were in agreement with most previous studies. Normative data for both WBA and admittance magnitude revealed double-peaked patterns with the 1st peak at 1.25–2 kHz and the 2nd peak at 5–8 kHz, while normative admittance phase data showed 2 peaks at 0.8 and 4 kHz. There were no significant differences between ears or gender for the 3 WAI measures. Standard deviations for all 3 measures were highest at frequencies above 4 kHz.
Conclusions
The 3 WAI measures between 1 kHz and 4 kHz may provide the most stable response of the outer and middle ear. WAI measures at frequencies above 4 kHz were more variable. The normative data established in the present study may serve as a reference for evaluating outer and middle ear function in neonates.

from #Audiology via ola Kala on Inoreader http://ift.tt/2ojBPjm
via IFTTT

Speech Inconsistency in Children With Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

Purpose
The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and speech delay.
Method
Participants included 48 children ranging from 4;7 to 17;8 (years;months) with CAS (n = 10), CAS + language impairment (n = 10), speech delay (n = 10), language impairment (n = 9), or typical development (n = 9). Speech inconsistency was assessed at phonemic and token-to-token levels using a variety of stimuli.
Results
Children with CAS and CAS + language impairment performed equivalently on all inconsistency assessments. Children with language impairment evidenced high levels of speech inconsistency on the phrase “buy Bobby a puppy.” Token-to-token inconsistency of monosyllabic words and the phrase “buy Bobby a puppy” was sensitive and specific in differentiating children with CAS and speech delay, whereas inconsistency calculated on other stimuli (e.g., multisyllabic words) was less efficacious in differentiating between these disorders.
Conclusions
Speech inconsistency is a core feature of CAS and is efficacious in differentiating between children with CAS and speech delay; however, sensitivity and specificity are stimuli dependent.
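Token-to-token inconsistency of the kind used here is typically computed as the proportion of items whose repeated productions are not all identical. A minimal sketch of that proportion, with hypothetical phonetic transcriptions (not the study's scoring protocol):

```python
def token_to_token_inconsistency(productions):
    """productions: dict mapping each word to a list of repeated
    transcriptions. An item counts as inconsistent if its repetitions
    are not all identical; returns the proportion of inconsistent items."""
    inconsistent = sum(
        1 for tokens in productions.values() if len(set(tokens)) > 1
    )
    return inconsistent / len(productions)

# Hypothetical sample: three repetitions of each monosyllabic word
sample = {
    "dog":  ["dɔg", "dɔg", "dɔg"],   # consistent
    "cup":  ["kʌp", "tʌp", "kʌp"],   # inconsistent
    "fish": ["fɪʃ", "fɪs", "pɪʃ"],   # inconsistent
    "ball": ["bɔl", "bɔl", "bɔl"],   # consistent
}
score = token_to_token_inconsistency(sample)  # 0.5
```

A diagnostic cutoff on this proportion is what yields the sensitivity and specificity figures the authors report for differentiating CAS from speech delay.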

from #Audiology via ola Kala on Inoreader http://ift.tt/2oZudkF
via IFTTT

The Role of Frequency in Learning Morphophonological Alternations: Implications for Children With Specific Language Impairment

Purpose
The aim of this article was to explore how the type of allomorph (e.g., past tense buzz[d] vs. nod[əd]) influences the ability to perceive and produce grammatical morphemes in children with typical development and with specific language impairment (SLI).
Method
The participants were monolingual Australian English–speaking children. The SLI group included 13 participants (mean age = 5;7 [years;months]); the control group included 19 children with typical development (mean age = 5;4). Both groups performed a grammaticality judgment task and an elicited production task with the same set of nonce verbs in third-person singular and past tense forms.
Results
Five-year-old children are still learning to generalize morphophonological patterns to novel verbs, and syllabic /əz/ and /əd/ allomorphs are significantly more challenging to produce, particularly for the SLI group. The greater phonetic content of these syllabic forms did not enhance perception.
Conclusions
Acquisition of morphophonological patterns involving low-frequency allomorphs is still underway in 5-year-old children with typical development, and it is even more protracted in SLI populations, despite these patterns being highly predictable. Children with SLI will therefore benefit from targeted intervention with low-frequency allomorphs.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qOvXkF
via IFTTT

Anxiety in 11-Year-Old Children Who Stutter: Findings From a Prospective Longitudinal Community Sample

Purpose
To examine if a community sample of 11-year-old children with persistent stuttering have higher anxiety than children who have recovered from stuttering and nonstuttering controls.
Method
Participants in a community cohort study were categorized into 3 groups: (a) those with persistent stuttering, (b) those with recovered stuttering, and (c) nonstuttering controls. Linear regression modeling compared outcomes on measures of child anxiety and emotional and behavioral functioning for the 3 groups.
Results
Without adjustment for covariates (unadjusted analyses), the group with persistent stuttering showed significantly increased anxiety compared with the recovered stuttering group and nonstuttering controls. The group with persistent stuttering had a higher number of children with autism spectrum disorder and/or learning difficulties. Once these variables were included as covariates in subsequent analysis, there was no difference in anxiety, emotional and behavioral functioning, or temperament among groups.
Conclusion
Although recognized to be associated with stuttering in clinical samples, anxiety was not higher in school-age children who stutter in a community cohort. It may be that anxiety develops later or is less marked in community cohorts compared with clinical samples. We did, however, observe higher anxiety scores in those children who stuttered and had autism spectrum disorder or learning difficulties. Implications and recommendations for research are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2prUfjh
via IFTTT

Auditory Environment Across the Life Span of Cochlear Implant Users: Insights From Data Logging

Purpose
We describe the natural auditory environment of people with cochlear implants (CIs), how it changes across the life span, and how it varies between individuals.
Method
We performed a retrospective cross-sectional analysis of Cochlear Nucleus 6 CI sound-processor data logs. The logs were obtained from 1,501 people with CIs (ages 0–96 years). They covered over 2.4 million hr of implant use and indicated how much time the CI users had spent in various acoustical environments. We investigated exposure to spoken language, noise, music, and quiet, and analyzed variation between age groups, users, and countries.
Results
CI users spent a substantial part of their daily life in noisy environments. As a consequence, most speech was presented in background noise. We found significant differences between age groups for all auditory scenes. Yet even within the same age group and country, variability between individuals was substantial.
Conclusions
Regardless of their age, people with CIs face challenging acoustical environments in their daily life. Our results underline the importance of supporting them with assistive listening technology. Moreover, we found large differences between individuals' auditory diets that might contribute to differences in rehabilitation outcomes. Their causes and effects should be investigated further.

from #Audiology via ola Kala on Inoreader http://ift.tt/2onnAa7
via IFTTT

Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients

Purpose
The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers.
Method
Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally uttered words that were contrastive in lexical tones. For Task 2, a disyllabic word (yanjing) was manipulated orthogonally, varying in fundamental-frequency (F0) contours and duration patterns. Participants identified each token with the second syllable jing pronounced with Tone 1 (a high level tone) as eyes or with Tone 4 (a high falling tone) as eyeglasses.
Results
CI participants' recognition accuracy was significantly lower than NH listeners' in Task 1. In Task 2, CI participants' reliance on F0 contours was significantly less than that of NH listeners; their reliance on duration patterns, however, was significantly higher than that of NH listeners. Both CI and NH listeners' performance in Task 1 was significantly correlated with their reliance on F0 contours in Task 2.
Conclusion
For pediatric CI recipients, lexical-tone recognition using naturally uttered words is primarily related to their reliance on F0 contours, although duration patterns may be used as an additional cue.

from #Audiology via ola Kala on Inoreader http://ift.tt/2nTtErw
via IFTTT

Response to de Wit et al., 2016, “Characteristics of Auditory Processing Disorders: A Systematic Review”

Purpose
This letter to the editor is in response to a review by de Wit et al. (2016), “Characteristics of Auditory Processing Disorders: A Systematic Review,” published in April 2016 by Journal of Speech, Language, and Hearing Research.
Conclusion
The author argues that the conclusions in the de Wit et al. (2016) review are unfortunate in light of advances made in the clinical diagnosis and treatment of bottom-up auditory processing disorders in children.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qWPtr7
via IFTTT

Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing

Purpose
Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios.
Method
Speech perception in noise and AVWM were measured in 30 listeners (age range 31–67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition.
Results
After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition) but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit) was not predictive of speech perception in any conditions.
Conclusions
AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qUbB9g
via IFTTT

Recovery of Online Sentence Processing in Aphasia: Eye Movement Changes Resulting From Treatment of Underlying Forms

Purpose
The present study tested whether (and how) language treatment changed online sentence processing in individuals with aphasia.
Method
Participants with aphasia (n = 10) received a 12-week program of Treatment of Underlying Forms (Thompson & Shapiro, 2005) focused on production and comprehension of passive sentences. Before and after treatment, participants performed a sentence-picture matching task with active and passive sentences as eye movements were tracked. Twelve age-matched controls also performed the task once each.
Results
In the age-matched group, eye movements indicated agent-first predictive processing after hearing the subject noun, followed by rapid thematic reanalysis after hearing the verb form. Pretreatment eye movements in the participants with aphasia showed no predictive agent-first processing, and more accurate thematic analysis in active compared to passive sentences. After treatment, which resulted in improved offline passive sentence production and comprehension, participants were more likely to respond correctly when they made agent-first eye movements early in the sentence, showed equally reliable thematic analysis in active and passive sentences, and were less likely to use a spatially based alternative response strategy.
Conclusions
These findings suggest that treatment focused on improving sentence production and comprehension supports the emergence of more normal-like sentence comprehension processes.

from #Audiology via ola Kala on Inoreader http://ift.tt/2qFUo0t
via IFTTT

Language Development and Brain Magnetic Resonance Imaging Characteristics in Preschool Children With Cerebral Palsy

Purpose
The purpose of this study was to investigate characteristics of language development in relation to brain magnetic resonance imaging (MRI) characteristics and the other contributing factors to language development in children with cerebral palsy (CP).
Method
The study included 172 children with CP who underwent brain MRI and language assessments between 3 and 7 years of age. The MRI characteristics were categorized as normal, malformation, periventricular white matter lesion (PVWL), deep gray matter lesion, focal infarct, cortical/subcortical lesion, and others. Neurodevelopmental outcomes such as ambulatory status, manual ability, cognitive function, and accompanying impairments were assessed.
Results
Both receptive and expressive language development quotients (DQs) were significantly related to PVWL or deep gray matter lesion severity. In multivariable analysis, only cognitive function was significantly related to receptive language development, whereas ambulatory status and cognitive function were significantly associated with expressive language development. More than one third of the children had a language developmental discrepancy between receptive and expressive DQs. Children with cortical/subcortical lesions were at high risk for this discrepancy.
Conclusions
Cognitive function is a key factor for both receptive and expressive language development. In children with PVWL or deep gray matter lesion, lesion severity seems to be useful to predict language development.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pUDOLK
via IFTTT

Muscle Bioenergetic Considerations for Intrinsic Laryngeal Skeletal Muscle Physiology

Purpose
Intrinsic laryngeal skeletal muscle bioenergetics, the means by which muscles produce fuel for muscle metabolism, is an understudied aspect of laryngeal physiology with direct implications for voice habilitation and rehabilitation. The purpose of this review is to describe bioenergetic pathways identified in limb skeletal muscle and introduce bioenergetic physiology as a necessary parameter for theoretical models of laryngeal skeletal muscle function.
Method
A comprehensive review of the human intrinsic laryngeal skeletal muscle physiology literature was conducted. Findings regarding intrinsic laryngeal muscle fiber complement and muscle metabolism in human models are summarized and exercise physiology methodology is applied to identify probable bioenergetic pathways used for voice function.
Results
Intrinsic laryngeal skeletal muscle fibers described in human models support the fast, high-intensity physiological requirements of these muscles for biological functions of airway protection. Inclusion of muscle bioenergetic constructs in theoretical modeling of voice training, detraining, fatigue, and voice loading have been limited.
Conclusions
Muscle bioenergetics, a key component for muscle training, detraining, and fatigue models in exercise science, is a little-considered aspect of intrinsic laryngeal skeletal muscle physiology. Partnered with knowledge of occupation-specific voice requirements, application of bioenergetics may inform novel considerations for voice habilitation and rehabilitation.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pOQUru
via IFTTT

Apoptosis and Vocal Fold Disease: Clinically Relevant Implications of Epithelial Cell Death

Purpose
Vocal fold diseases affecting the epithelium have a detrimental impact on vocal function. This review article provides an overview of apoptosis, the most commonly studied type of programmed cell death. Because apoptosis can damage epithelial cells, this article examines the implications of apoptosis on diseases affecting the vocal fold cover.
Method
A review of the extant literature was performed. We summarized the topics of epithelial tissue properties and apoptotic cell death, described what is currently understood about apoptosis in the vocal fold, and proposed several possible explanations for how the role of abnormal apoptosis during wound healing may be involved in vocal pathology.
Results and Conclusions
Apoptosis plays an important role in maintaining normal epithelial tissue function. The biological mechanisms responsible for vocal fold diseases of epithelial origin are only beginning to emerge. This article discusses speculations to explain the potential role of deficient versus excessive rates of apoptosis and how disorganized apoptosis may contribute to the development of common diseases of the vocal folds.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pxGYBV
via IFTTT

Randomized Controlled Trial in Clinical Settings to Evaluate Effectiveness of Coping Skills Education Used With Progressive Tinnitus Management

Purpose
This randomized controlled trial evaluated, within clinical settings, the effectiveness of coping skills education that is provided with progressive tinnitus management (PTM).
Method
At 2 Veterans Affairs medical centers, N = 300 veterans were randomized to either PTM intervention or 6-month wait-list control. The PTM intervention involved 5 group workshops: 2 led by an audiologist (teaching how to use sound as therapy) and 3 by a psychologist (teaching coping skills derived from cognitive behavioral therapy). It was hypothesized that PTM would be more effective than wait-list control in reducing functional effects of tinnitus and that there would be no differences in effectiveness between sites.
Results
At both sites, a statistically significant improvement in mean Tinnitus Functional Index scores was seen at 6 months for the PTM group. Combined data across sites revealed a statistically significant improvement in Tinnitus Functional Index relative to wait-list control. The effect size for PTM using the Tinnitus Functional Index was 0.36 (small).
Conclusions
Results suggest that PTM is effective at reducing tinnitus-related functional distress in clinical settings. Although effect sizes were small, they provide evidence of clinical effectiveness of PTM in the absence of stringent research-related inclusion criteria and with a relatively small number of sessions of cognitive behavioral therapy.
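The reported 0.36 is a standardized mean difference (Cohen's d): the between-group difference in outcome divided by the pooled standard deviation. A minimal sketch with hypothetical Tinnitus Functional Index change scores (illustrative numbers only, not the trial's data):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical 6-month TFI change scores (larger drop = more improvement)
waitlist_change = [-6, -2, -10, -4, -8, -5]
ptm_change = [-18, -12, -25, -9, -15, -20]
d = cohens_d(waitlist_change, ptm_change)  # positive: PTM improved more
```

By conventional benchmarks, d around 0.2 is small, 0.5 medium, and 0.8 large, which is why the trial's 0.36 is described as a small effect.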

from #Audiology via ola Kala on Inoreader http://ift.tt/2prZjEn
via IFTTT

Oral Language and Listening Comprehension: Same or Different Constructs?

Purpose
The purpose of this study was to add to our understanding of the dimensionality of oral language in children and to determine whether oral language and listening comprehension are separate constructs in children enrolled in preschool (PK) through 3rd grade.
Method
In the spring of the school year, children from 4 states (N = 1,869) completed multiple measures of oral language (i.e., expressive and receptive vocabulary and grammar) and listening comprehension as part of a larger study of the language bases of reading comprehension.
Results
Initial confirmatory factor analysis found evidence that measures of oral language and listening comprehension loaded on two separate factors in PK through 3rd grade; however, these factors were highly correlated at all grades.
Conclusions
These results suggest that oral language and listening comprehension are best characterized as a single oral language construct in PK through 3rd grade. The implications for early identification and intervention are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2pK5X8g
via IFTTT

Prevalence of Auditory Problems in Children With Feeding and Swallowing Disorders

Purpose
Although an interdisciplinary approach is recommended for assessment and management of feeding or swallowing difficulties, audiologists are not always included in the interdisciplinary team. The purpose of this study is to report the prevalence of middle ear and hearing problems in children with feeding and swallowing disorders and to compare this prevalence with that in typical children.
Method
A total of 103 children were included in the study: 44 children with feeding and swallowing disorders and 59 children without any such disorders. Audiological examinations included case-history information, visualization of the ear canals through otoscopy, middle ear evaluation through tympanometry, and hearing screenings using an audiometer.
Results
The odds of excessive cerumen (p < .0001, small effect size), middle ear dysfunction (p = .0148, small effect size), and hearing screening failure (p < .0001, large effect size) were 22.14, 2.97, and 13.5 times higher, respectively, in children with feeding and swallowing disorders than in typically developing children.
Conclusion
The significantly higher prevalence of hearing problems in children with feeding and swallowing disorders compared with typically developing children suggests that inclusion of an audiologist on the interdisciplinary team is likely to improve overall interventional outcomes for children with feeding and swallowing disorders.
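Odds comparisons like these come from a 2×2 contingency table: the odds of the finding in the clinical group divided by the odds in the comparison group. A minimal sketch with hypothetical counts (the study reports group sizes of 44 and 59, but these failure counts are invented for illustration):

```python
def odds_ratio(clinical_pos, clinical_neg, control_pos, control_neg):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    odds_clinical = clinical_pos / clinical_neg
    odds_control = control_pos / control_neg
    return odds_clinical / odds_control

# Hypothetical hearing-screening outcomes:
# 20 of 44 clinical children fail; 4 of 59 comparison children fail
or_fail = odds_ratio(20, 24, 4, 55)
```

An odds ratio of 1 would mean no association; values well above 1, as reported here, indicate the finding is much more likely in the clinical group.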

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ps2Nnn
via IFTTT

Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

Purpose
Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes.
Method
Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing.
Results
CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented with acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information.
Conclusion
CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.
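Category boundaries and slopes of this kind are read off the identification function along the VOT continuum. A minimal sketch that locates the 50% crossover on synthetic responses (illustrative; the study's actual psychometric fitting procedure may differ):

```python
import math

def logistic(vot, boundary=35.0, slope=0.25):
    """Proportion of long-VOT ("voiceless") responses; boundary in ms."""
    return 1.0 / (1.0 + math.exp(-slope * (vot - boundary)))

def estimate_boundary_and_slope(vot_steps, props):
    """Locate the 50% crossover by linear interpolation between the two
    continuum steps straddling 0.5, and report the local slope there."""
    for i in range(len(vot_steps) - 1):
        p0, p1 = props[i], props[i + 1]
        if p0 <= 0.5 <= p1:
            frac = (0.5 - p0) / (p1 - p0)
            step = vot_steps[i + 1] - vot_steps[i]
            return vot_steps[i] + frac * step, (p1 - p0) / step
    raise ValueError("no 0.5 crossover found in the data")

# Synthetic identification data on a 9-step, 0-80 ms VOT continuum
vot_steps = [10 * i for i in range(9)]
props = [logistic(v) for v in vot_steps]
boundary, slope = estimate_boundary_and_slope(vot_steps, props)
```

A shallower estimated slope indicates a less reliable category boundary, which is the pattern reported for the CI group; a shift in the boundary with speech rate is the rate normalization effect.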

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2oZtEHu
via IFTTT

Spoken Language Production in Young Adults: Examining Syntactic Complexity

Purpose
In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language impairment.
Method
Forty adults (mean age = 22 years, 10 months) with typical language development participated in an interview that consisted of 3 speaking tasks: a general conversation about common, everyday topics; a narrative retelling task that involved fables; and a question-and-answer, critical-thinking task about the fables. Each speaker's interview was audio-recorded, transcribed, broken into communication units, coded for main and subordinate clauses, entered into Systematic Analysis of Language Transcripts (Miller, Iglesias, & Nockerts, 2004), and analyzed for mean length of communication unit and clausal density.
Results
Both the narrative and critical-thinking tasks elicited significantly greater syntactic complexity than the conversational task. It was also found that syntactic complexity was significantly greater during the narrative task than the critical-thinking task.
Conclusion
Syntactic complexity was best revealed by a narrative task that involved fables. The study offers benchmarks for language development during early adulthood.
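Both complexity indices are simple ratios over the coded transcript: mean length of communication unit (MLCU) is words per C-unit, and clausal density is clauses (main plus subordinate) per C-unit. A minimal sketch with hypothetical coded C-units (not actual SALT output):

```python
def syntactic_complexity(c_units):
    """c_units: list of (word_count, clause_count) tuples, one per
    communication unit. Returns (MLCU, clausal density)."""
    n = len(c_units)
    mlcu = sum(words for words, _ in c_units) / n
    clausal_density = sum(clauses for _, clauses in c_units) / n
    return mlcu, clausal_density

# Hypothetical sample: 4 C-units coded for words and clauses
sample = [(9, 2), (6, 1), (12, 3), (7, 1)]
mlcu, density = syntactic_complexity(sample)  # 8.5 words, 1.75 clauses per C-unit
```

A clausal density above 1.0 reflects subordination: the finding here is that fable retelling pushed both ratios higher than conversation did.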

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2qTxLss
via IFTTT
