Monday, December 21, 2015

Single-Sided Deafness, Cochlear Implants, and Speech Understanding

Zeitler et al (2015) reported on nine people (ages 12 to 63 years) with single-sided deafness (SSD) and normal hearing in the other ear, all of whom underwent cochlear implantation in the SSD ear. With regard to post-op speech understanding in noise, the authors report that “one of our aims was to assess the value of a CI for SSD patients when the listening environment simulated a ‘real world’ situation, that is, listening in a restaurant where the talker was on the side of the CI.”



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1PhfWqE
via IFTTT

Variations on a Theme: Mild Hearing Loss and Word Recognition Scores

Timmer et al (2015) report that the prevalence of mild hearing impairment varies greatly with the definition used. They note that the weak correlations between audiologic assessments and patients' self-reported difficulties indicate that further assessment of individuals with mild hearing impairment is warranted. In their Table 2 (page 788) they offer a “summary of descriptive classifications of mild hearing impairment,” which collects several common, broadly similar but non-identical definitions of mild hearing loss.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1TcUXX2
via IFTTT

Can Tinnitus Go Away?

If you have tinnitus, you may want nothing more than for your hearing condition to go away. Tinnitus is sometimes confused with hearing loss, but it is not itself a loss of hearing. Instead, it is most aptly described as a ringing sound in the ears. People who have tinnitus may describe the sound in different ways, and it may not always be heard as a ring. For example, it may sound like a hiss, a chirp, a buzz, a whistle or another type of sound. These sounds may be continuous or intermittent, and they may seem very loud or soft. If you have this condition, you may wonder whether tinnitus can go away, and whether you can live a normal life with regular hearing again.

Determining the Cause of Tinnitus
If you want to know whether tinnitus can go away, be aware that it is usually a symptom of an underlying condition. Everything from the natural aging process and exposure to loud sounds to interactions with some drugs, ear blockages and some diseases can cause it. In many cases, treating the underlying condition helps the tinnitus resolve, so your doctor may run tests to identify that cause as the first step in developing a treatment plan.

Other Treatments
If you want to know whether tinnitus can go away, it is also important to note that different treatments are available. When there is no treatable underlying condition, such as when tinnitus results from the natural aging process, you may need to experiment with different treatments to find one that works for you. Options include ear drops, surgery, anti-anxiety medication, tonal therapy and more. Many people try the non-pharmaceutical and non-surgical options before turning to medications or surgery.

So, can tinnitus go away? Yes, but many people unfortunately have to try several treatments before they discover one that provides effective relief. If you believe that you have tinnitus, it is important to seek a diagnosis and determine the underlying cause as a first step. Your doctor can help you with this process, and the underlying cause may indicate which treatments are best to try first.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1TdSRpY
via IFTTT

Erratum



from #Audiology via ola Kala on Inoreader http://ift.tt/1U0wgNu
via IFTTT

fMRI as a Preimplant Objective Tool to Predict Postimplant Oral Language Outcomes in Children with Cochlear Implants.

Objectives: Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Design: Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1OGQ6te
via IFTTT

Context effects on second-language learning of tonal contrasts

Studies of lexical tone learning generally focus on monosyllabic contexts, while reports of phonetic learning benefits associated with input variability are based largely on experienced learners. This study trained inexperienced learners on Mandarin tonal contrasts to test two hypotheses regarding the influence of context and variability on tone learning. The first hypothesis was that increased phonetic variability of tones in disyllabic contexts makes initial tone learning more challenging in disyllabic than in monosyllabic words. The second hypothesis was that the learnability of a given tone varies across contexts due to differences in tonal variability. Results of a word learning experiment supported both hypotheses: tones were acquired less successfully in disyllables than in monosyllables, and the relative difficulty of disyllables was closely related to contextual tonal variability. These results indicate limited relevance of monosyllable-based data on Mandarin learning for the disyllabic majority of the Mandarin lexicon. Furthermore, in the short term, variability can diminish learning; its effects are not necessarily beneficial but depend on acquisition stage and other learner characteristics. These findings thus highlight the importance of considering contextual variability and the interaction between variability and type of learner in the design, interpretation, and application of research on phonetic learning.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QUtzxQ
via IFTTT

Classification of underwater targets from autonomous underwater vehicle sampled bistatic acoustic scattered fields

One of the long-term goals of autonomous underwater vehicle (AUV) mine hunting is to have multiple inexpensive AUVs in a harbor autonomously classify hazards. Existing acoustic methods for target classification using AUV-based sensing, such as sidescan and synthetic aperture sonar, require an expensive payload on each outfitted vehicle and post-processing and/or image interpretation. A vehicle payload and machine learning classification methodology using bistatic angle dependence of target scattering amplitudes between a fixed acoustic source and target has been developed for onboard, fully autonomous classification with a lower cost per vehicle. To achieve the high-quality, densely sampled three-dimensional (3D) bistatic scattering data required by this research, vehicle sampling behaviors and an acoustic payload for precision timed data acquisition with a 16-element nose array were demonstrated. 3D bistatic scattered field data were collected by an AUV around spherical and cylindrical targets insonified by a 7–9 kHz fixed source. The collected data were compared to simulated scattering models. Classification and confidence estimation were shown for the sphere versus cylinder case on the resulting real and simulated bistatic amplitude data. The final models were used for classification of simulated targets in real time in the LAMSS MOOS-IvP simulation package [M. Benjamin, H. Schmidt, P. Newman, and J. Leonard, J. Field Rob. 27, 834–875 (2010)].



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ZleIPW
via IFTTT

Experimental verification of acoustic trace wavelength enhancement

Directivity is essentially a measure of a sonar array's beamwidth obtained in a spherically isotropic ambient noise field; an array with a narrow mainbeam is more directive than one with a broader mainbeam. For common sonar systems, the directivity factor (or directivity index) is directly proportional to the ratio of the array's physical length (which is always constrained) to the incident acoustic trace wavelength. Increasing this ratio, by creating additional trace wavelengths along a fixed array length, increases array directivity. Embedding periodic structures within an array generates Bragg scattering of the incident acoustic plane wave along the array's surface. The Bragg-scattered propagating waves are shifted in a precise manner and create shorter-wavelength replicas of the original acoustic trace wavelength. These replicated trace wavelengths (which contain identical signal-arrival information) increase the array's length-to-wavelength ratio and thus its directivity. A smaller array, in theory, can therefore have the directivity of a much larger array. Measurements completed in January 2015 at the Naval Undersea Warfare Center's Acoustic Test Facility in Newport, RI, verified these replicated, shorter trace wavelengths with near-perfect agreement.
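As a rough sketch of the length-to-wavelength relationship described above (using the standard unshaded line-array approximation DI ≈ 10·log10(2L/λ), which is a textbook formula and not taken from these measurements), halving the effective trace wavelength for a fixed array length buys about 3 dB of directivity:

```python
import math

def directivity_index_db(array_length_m, wavelength_m):
    """Approximate directivity index of an unshaded line array:
    DI ~ 10*log10(2*L/lambda), valid when L >> lambda."""
    return 10.0 * math.log10(2.0 * array_length_m / wavelength_m)

# Fixed 1 m array: halving the effective trace wavelength
# (as the Bragg-scattered replicas do) adds ~3 dB of directivity.
di_original = directivity_index_db(1.0, 0.5)     # incident wavelength
di_replicated = directivity_index_db(1.0, 0.25)  # halved replica
```

This is why replicated shorter trace wavelengths let a small array stand in for a much longer one.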



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ZleIPO
via IFTTT

Comparisons among ten models of acoustic backscattering used in aquatic ecosystem research

Analytical and numerical scattering models with accompanying digital representations are used increasingly to predict acoustic backscatter by fish and zooplankton in research and ecosystem monitoring applications. Ten such models were applied to targets with simple geometric shapes and parameterized (e.g., size and material properties) to represent biological organisms such as zooplankton and fish, and their predictions of acoustic backscatter were compared to those from exact or approximate analytical models, i.e., benchmarks. These comparisons were made for a sphere, spherical shell, prolate spheroid, and finite cylinder, each with homogeneous composition. For each shape, four target boundary conditions were considered: rigid-fixed, pressure-release, gas-filled, and weakly scattering. Target strength (dB re 1 m2) was calculated as a function of insonifying frequency (f = 12 to 400 kHz) and angle of incidence (θ = 0° to 90°). In general, the numerical models (i.e., boundary- and finite-element) matched the benchmarks over the full range of simulation parameters. While the approximate analytical models have inherent errors, they also have advantages: they are computationally efficient and, in certain cases, outperformed the numerical models under conditions where the latter did not converge.
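For orientation on the target-strength convention used above, here is the textbook high-frequency (geometric) limit for one of the benchmark shapes, a rigid sphere; this formula is a standard result assumed for illustration, not one of the ten models compared in the study:

```python
import math

def rigid_sphere_ts_db(radius_m):
    """Geometric-limit backscatter target strength of a rigid sphere:
    sigma_bs = a^2 / 4, so TS = 10*log10(a^2 / 4) in dB re 1 m^2.
    Valid only when ka >> 1 (radius much larger than the wavelength)."""
    return 10.0 * math.log10(radius_m ** 2 / 4.0)

ts = rigid_sphere_ts_db(0.1)  # a 10 cm sphere, roughly -26 dB re 1 m^2
```

Exact modal-series or numerical solutions are needed near and below ka ≈ 1, which is where the benchmark comparisons above matter most.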



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QUtzhi
via IFTTT

Ocean acoustic tomography from different receiver geometries using the adjoint method

In this paper, an ocean acoustic tomography inversion using the adjoint method in a shallow water environment is presented. The propagation model used is an implicit Crank–Nicolson finite difference parabolic equation solver with a non-local boundary condition. Unlike previous matched-field processing works using the complex pressure fields as the observations, here, the observed signals are the transmission losses. Based on the code tests of the tangent linear model, the adjoint model, and the gradient, the optimization problem is solved by a gradient-based minimization algorithm. The inversions are performed in numerical simulations for two geometries: one in which hydrophones are sparsely distributed in the horizontal direction, and another in which the hydrophones are distributed vertically. The spacing in both cases is well beyond the half-wavelength threshold at which beamforming could be used. To deal with the ill-posedness of the inverse problem, a linear differential regularization operator on the sound-speed profile is used to smooth the inversion results. The L-curve criterion is adopted to select the regularization parameter; the optimal value is easily read off at the elbow of the log-log plot of the residual norm of the measured-minus-predicted fields versus the norm of the penalty function.
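The regularized inversion step can be sketched in miniature as follows. This assumes a generic linear forward operator `A` and a first-difference smoothing operator `L`; the actual parabolic-equation forward model and adjoint gradient are far more involved, so treat this only as an illustration of the penalty structure:

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Minimize ||A x - b||^2 + lam^2 * ||L x||^2 via the normal
    equations. L is a differential operator that penalizes roughness,
    smoothing the recovered (sound-speed-like) profile."""
    lhs = A.T @ A + lam ** 2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

def first_difference(n):
    """First-difference operator for an n-point profile: row i maps
    x to x[i+1] - x[i]."""
    return np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# Sweeping lam and plotting residual norm vs. penalty norm on
# log-log axes traces the L-curve; the elbow picks lam.
```

Larger `lam` drives the solution toward a smooth (here, constant) profile while preserving its mean, which is exactly the trade-off the L-curve elbow balances.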



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ZleIzp
via IFTTT

Optimizing swept-tone protocols for recording distortion-product otoacoustic emissions in adults and newborns

Distortion-product otoacoustic emissions (DPOAEs), which are routinely used in the audiology clinic and research laboratory, are conventionally recorded with discrete tones presented sequentially across frequency. However, a more efficient technique sweeps tones smoothly across frequency and applies a least-squares-fitting (LSF) procedure to compute estimates of otoacoustic emission phase and amplitude. In this study, the optimal parameters (i.e., sweep rate and duration of the LSF analysis window) required to record and analyze swept-tone DPOAEs were tested and defined in 15 adults and 10 newborns. Results indicate that optimal recording of swept-tone DPOAEs requires use of an appropriate analysis bandwidth, defined as the range of frequencies included in each least squares fit model. To achieve this, the rate at which the tones are swept and the length of the LSF analysis window must be carefully considered and changed in concert. Additionally, the optimal analysis bandwidth must be adjusted to accommodate frequency-dependent latency shifts in the reflection-component of the DPOAE. Parametric guidelines established here are equally applicable to adults and newborns. However, elevated noise during newborn swept-tone DPOAE recordings warrants protocol adaptations to improve signal-to-noise ratio and response quality.
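The least-squares-fitting step can be sketched as a single-frequency fit over one analysis window. The function below is a generic illustration of the LSF idea (names and the synthetic check are ours, not the study's protocol); a real swept-tone analysis slides this window along the recording while tracking the instantaneous DPOAE frequency:

```python
import numpy as np

def lsf_tone_fit(window, t, freq_hz):
    """Least-squares fit of a tone of known frequency to one analysis
    window, modeling x(t) = a*cos(2*pi*f*t) + b*sin(2*pi*f*t).
    Returns (amplitude, phase_radians)."""
    w = 2.0 * np.pi * freq_hz
    basis = np.column_stack([np.cos(w * t), np.sin(w * t)])
    (a, b), *_ = np.linalg.lstsq(basis, window, rcond=None)
    return float(np.hypot(a, b)), float(np.arctan2(-b, a))

# Synthetic check: recover a known 1 kHz component from a 20 ms window.
fs = 32000.0
t = np.arange(0, 0.02, 1.0 / fs)
x = 0.5 * np.cos(2.0 * np.pi * 1000.0 * t + 0.3)
amp, phase = lsf_tone_fit(x, t, 1000.0)
```

The window length sets the analysis bandwidth: shorter windows admit faster sweep rates but average over a wider frequency span, which is exactly the rate-versus-window trade-off the study optimizes.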



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ZleGYb
via IFTTT

Vocalic correlates of pitch in whispered versus normal speech

In whispered speech, the fundamental frequency is absent as the main cue to pitch. This study investigated how different pitch targets can be coded acoustically in whispered relative to normal speech. Secondary acoustic correlates found in normal speech may be preserved in whisper; alternatively, whispering speakers may provide compensatory information. Compared to earlier studies, a more comprehensive set of acoustic correlates (duration, intensity, formants, center of gravity, spectral balance) and a larger set of materials were included. To elicit maximal acoustic differences among the low, mid, and high pitch targets, linguistic and semantic load were minimized: 12 native Dutch speakers produced the point vowels (/a, i, u/) in nonsense vowel-consonant-vowel targets (with C = {/s/, /f/}). Acoustic analyses showed that, in addition to the previously reported systematic changes in formants, center of gravity, spectral balance, and intensity also varied with pitch target, in both whispered and normal speech. Some acoustic correlates differed more in whispered than in normal speech, suggesting that speakers can adopt a compensatory strategy when coding pitch in the speech mode that lacks the main cue. Speakers furthermore varied in the extent to which particular correlates were used, and in the combination of correlates they altered systematically.
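One of the correlates measured above, center of gravity, is commonly defined as the magnitude-weighted mean frequency of the spectrum. A minimal sketch of that definition (our illustration; the study's exact estimator and weighting may differ):

```python
import numpy as np

def spectral_center_of_gravity(signal, fs):
    """Magnitude-weighted mean frequency (spectral centroid) of a
    signal, one common definition of 'center of gravity' in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# A vowel with more high-frequency energy (as in whisper) yields a
# higher center of gravity than its normally phonated counterpart.
```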



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QUtz0E
via IFTTT