Wednesday, 23 August 2017

Characteristics of Real-World Signal to Noise Ratios and Speech Listening Situations of Older Adults With Mild to Moderate Hearing Loss.

Objectives: The first objective was to determine the relationship between speech level, noise level, and signal to noise ratio (SNR), as well as the distribution of SNR, in real-world situations wherein older adults with hearing loss are listening to speech. The second objective was to develop a set of prototype listening situations (PLSs) that describe the speech level, noise level, SNR, availability of visual cues, and locations of speech and noise sources of typical speech listening situations experienced by these individuals. Design: Twenty older adults with mild to moderate hearing loss carried digital recorders for 5 to 6 weeks to record sounds for 10 hours per day. They also repeatedly completed in situ surveys on smartphones several times per day to report the characteristics of their current environments, including the locations of the primary talker (if they were listening to speech) and noise source (if it was noisy) and the availability of visual cues. For surveys where speech listening was indicated, the corresponding audio recording was examined. Speech-plus-noise and noise-only segments were extracted, and the SNR was estimated using a power subtraction technique. SNRs and the associated survey data were subjected to cluster analysis to develop PLSs. Results: The speech level, noise level, and SNR of 894 listening situations were analyzed to address the first objective. Results suggested that as noise levels increased from 40 to 74 dBA, speech levels systematically increased from 60 to 74 dBA, and SNR decreased from 20 to 0 dB. Most SNRs (62.9%) of the collected recordings were between 2 and 14 dB. Very noisy situations that had SNRs below 0 dB comprised 7.5% of the listening situations. To address the second objective, recordings and survey data from 718 observations were analyzed. Cluster analysis suggested that the participants' daily listening situations could be grouped into 12 clusters (i.e., 12 PLSs). The most frequently occurring PLSs were characterized as having the talker in front of the listener with visual cues available, either in quiet or in diffuse noise. The mean speech level of the PLSs that described quiet situations was 62.8 dBA, and the mean SNR of the PLSs that represented noisy environments was 7.4 dB (speech = 67.9 dBA). A subset of observations (n = 280), which was obtained by excluding the data collected from quiet environments, was further used to develop PLSs that represent noisier situations. From this subset, two PLSs were identified. These two PLSs had lower SNRs (mean = 4.2 dB), but the most frequent situations still involved speech from in front of the listener in diffuse noise with visual cues available. Conclusions: The present study indicated that visual cues and diffuse noise were exceedingly common in real-world speech listening situations, while environments with negative SNRs were relatively rare. The characteristics of speech level, noise level, and SNR, together with the PLS information reported by the present study, can be useful for researchers aiming to design ecologically valid assessment procedures to estimate real-world speech communicative functions for older adults with hearing loss. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.
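The power subtraction technique mentioned above can be stated compactly: if speech and noise are uncorrelated, the speech power is the power of the speech-plus-noise segment minus the power of the noise-only segment. Below is a minimal Python sketch of such an estimator, assuming stationary noise across the two segments; the function name and the numerical floor are illustrative, not the authors' implementation.

    import numpy as np

    def estimate_snr_db(speech_plus_noise, noise_only):
        """Estimate SNR in dB from a speech+noise segment and a noise-only segment."""
        p_sn = np.mean(np.square(speech_plus_noise))  # mean power, speech + noise
        p_n = np.mean(np.square(noise_only))          # mean power, noise alone
        # Power subtraction: assumes uncorrelated speech and noise, and
        # noise that is stationary across the two segments.
        p_s = max(p_sn - p_n, 1e-12)                  # floor avoids log of a non-positive value
        return 10.0 * np.log10(p_s / p_n)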

from #Audiology via ola Kala on Inoreader http://ift.tt/2wA71QY
via IFTTT

Effects of High Sound Exposure During Air-Conducted Vestibular Evoked Myogenic Potential Testing in Children and Young Adults.

Objectives: Vestibular evoked myogenic potential (VEMP) testing is increasingly utilized in pediatric vestibular evaluations because of its diagnostic capability to identify otolith dysfunction and its feasibility for testing. However, there is evidence that the high-intensity stimulation required to elicit a reliable VEMP response causes acoustic trauma in adults. Despite the utility of VEMP testing in children, it is unknown whether similar effects occur in them. It is hypothesized that sound exposure may be increased in children because of differences in ear-canal volume (ECV) compared with adults, and that stimulus parameters (e.g., signal duration and intensity) will alter the exposure levels delivered to a child's ear. The objectives of this study were to (1) measure peak to peak equivalent sound pressure levels (peSPL) in children with normal hearing (CNH) and young adults with normal hearing (ANH) using high-intensity VEMP stimuli, (2) determine the effect of ECV on peSPL and calculate a safe exposure level for VEMP, and (3) assess whether cochlear changes exist after VEMP exposure. Design: The study used a 2-phase approach. Fifteen CNH and 12 ANH participated in phase I. Equivalent ECV was measured. In 1 ear, peSPL was recorded for 5 seconds at 105 to 125 dB SPL, in 5-dB increments, for 500- and 750-Hz tone bursts. Recorded peSPL values (accounting for stimulus duration) were then used to calculate safe sound energy exposure values for VEMP testing against the 132-dB recommended energy allowance from the 2003 European Union guidelines. Fifteen CNH and 10 ANH received cervical and ocular VEMP testing in 1 ear in phase II. Subjects completed tympanometry, pre- and postexposure audiometric threshold testing, distortion product otoacoustic emissions, and a questionnaire addressing subjective otologic symptoms to assess the effect of VEMP exposure on cochlear function. Results: (1) In response to high-intensity stimulation (e.g., 125 dB SPL), CNH had significantly higher peSPL measurements and smaller ECVs than ANH. (2) A significant linear relationship exists between equivalent ECV (as measured by diagnostic tympanometry) and peSPL, with an effect on total sound energy exposure level; based on the phase I data, 120 dB SPL was determined to be an acoustically safe stimulation level for testing children. (3) Using the calculated safe stimulation level for VEMP testing, there was no significant effect of VEMP exposure on cochlear function (as measured by audiometric thresholds, distortion product otoacoustic emission amplitudes, or subjective symptoms) in CNH or ANH. Conclusions: peSPL recordings in children's ears are significantly higher (~3 dB) than those in adults in response to commonly used high-intensity VEMP stimuli. Equivalent ECV contributes to the peSPL delivered to the ear during VEMP testing and should be considered when determining safe acoustic VEMP stimulus parameters; children with smaller ECVs are at risk of unsafe sound exposure during routine VEMP testing, and stimuli should not exceed 120 dB SPL. Using a 120 dB SPL stimulus level for children during VEMP testing yields no change in cochlear function and reliable VEMP responses. Copyright (C) 2017 Wolters Kluwer Health, Inc. All rights reserved.
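As a rough illustration of how a total sound energy exposure level can be checked against a fixed energy allowance, the sketch below integrates the recorded peSPL over the cumulative stimulus-on time (SEL = peSPL + 10*log10(T)). The burst duration and sweep count are hypothetical, and the abstract does not give the authors' exact integration procedure.

    import math

    def total_exposure_level_db(pespl_db, burst_duration_s, n_bursts):
        """Exposure level for a train of identical tone bursts: peSPL + 10*log10(total on-time)."""
        total_on_time_s = burst_duration_s * n_bursts
        return pespl_db + 10.0 * math.log10(total_on_time_s)

    # Hypothetical example: 125 dB peSPL, 8-ms bursts, 200 sweeps
    print(round(total_exposure_level_db(125.0, 0.008, 200), 1))  # 127.0, under the 132-dB allowance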

from #Audiology via ola Kala on Inoreader http://ift.tt/2xuju50
via IFTTT

Detecting and Learning New Words: The Impact of Advancing Age and Hearing Loss

Purpose
Lexical acquisition was examined in children and adults to determine if the skills needed to detect and learn new words are retained in the adult years. In addition to advancing age, the effects of hearing loss were also examined.
Method
Measures of word recognition, detection of nonsense words within sentences, and novel word learning were obtained in quiet for 20 children with normal hearing and 21 with hearing loss (8–12 years) as well as for 15 adults with normal hearing and 17 with hearing loss (58–79 years). Listeners with hearing loss were tested with and without high-frequency acoustic energy to identify the type of amplification (narrowband, wideband, or frequency lowering) that yielded optimal performance.
Results
No differences were observed between the adults and children with normal hearing except for the adults' better nonsense word detection. The poorest performance was observed for the listeners with hearing loss in the unaided condition. Performance improved significantly with amplification, to levels at or near those of their counterparts with normal hearing. With amplification, the adults performed as well as the children on all tasks except word recognition.
Conclusions
Adults retain the skills necessary for lexical acquisition regardless of hearing status. However, uncorrected hearing loss nearly eliminates these skills.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_AJA-17-0025/2652560/Detecting-and-Learning-New-Words-The-Impact-of
via IFTTT

A Re-examination of the Effect of Masker Phase Curvature on Non-simultaneous Masking

Abstract

Forward masking of a sinusoidal signal is determined not only by the masker’s power spectrum but also by its phase spectrum. Specifically, when the phase spectrum is such that the output of an auditory filter centred on the signal has a highly modulated (“peaked”) envelope, there is less masking than when that envelope is flat. This finding has been attributed to non-linearities, such as compression, reducing the average neural response to maskers that produce more peaked auditory filter outputs (Carlyon and Datta, J Acoust Soc Am 101:3636–3647, 1997). Here we evaluate an alternative explanation proposed by Wojtczak and Oxenham (J Assoc Res Otolaryngol 10:595–607, 2009). They reported a masker phase effect for 6-kHz signals when the masker components were at least an octave below the signal frequency. Wojtczak and Oxenham argued that this effect was inconsistent with cochlear compression and, because it did not occur at lower signal frequencies, was also inconsistent with more central compression. It was instead attributed to activation of the efferent system reducing the response to the subsequent probe. Here, experiment 1 replicated their main findings. Experiment 2 showed that the phase effect on off-frequency forward masking is similar at signal frequencies of 2 and 6 kHz, provided that one equates the number of components likely to interact within an auditory filter centred on the signal, thereby roughly equating the effect of masker phase on the peakiness of that filter output. Experiment 3 showed that for some subjects, masker phase also had a strong influence on off-frequency backward masking of the signal, and that the size of this effect correlated across subjects with that observed in forward masking. We conclude that the masker phase effect is mediated mainly by cochlear non-linearities, with a possible additional effect of more central compression. The data are not consistent with a role for the efferent system.
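Masker phase curvature experiments of this kind are typically built on Schroeder-phase harmonic complexes, whose component phases follow theta_n = C*pi*n*(n+1)/N with a scalar curvature C. The sketch below, with illustrative parameters rather than those of the experiments, generates such a complex and uses the peak-to-RMS ratio as a crude proxy for envelope peakiness.

    import numpy as np

    def schroeder_complex(f0, n_harmonics, curvature, fs=48000, dur=0.25):
        """Equal-amplitude harmonic complex with Schroeder phases theta_n = C*pi*n*(n+1)/N."""
        t = np.arange(int(fs * dur)) / fs
        x = np.zeros_like(t)
        for n in range(1, n_harmonics + 1):
            theta = curvature * np.pi * n * (n + 1) / n_harmonics
            x += np.cos(2 * np.pi * n * f0 * t + theta)
        return x / n_harmonics

    def crest_factor_db(x):
        """Peak-to-RMS ratio in dB; a crude stand-in for envelope peakiness."""
        return 20.0 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

    print(crest_factor_db(schroeder_complex(100, 40, curvature=1.0)))  # flat envelope, low crest
    print(crest_factor_db(schroeder_complex(100, 40, curvature=0.0)))  # cosine phase, peaked envelope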



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2vZgkbD
via IFTTT

Development of Phase Locking and Frequency Representation in the Infant Frequency-Following Response

Purpose
This study investigates the development of phase locking and frequency representation in infants using the frequency-following response to consonant–vowel syllables.
Method
The frequency-following response was recorded in 56 infants and 15 young adults to 2 speech syllables (/ba/ and /ga/), which were presented in randomized order to the right ear. Signal-to-noise ratio and Fsp analyses were used to verify that individual responses were present above the noise floor. Thirty-six and 39 infants met these criteria for the /ba/ or /ga/ syllables, respectively, and 31 infants met the criteria for both syllables. Data were analyzed to obtain measures of phase-locking strength and spectral magnitudes.
Results
Phase-locking strength to the fine structure in the consonant–vowel transition was higher in young adults than in infants, but phase locking was equivalent at the fundamental frequency between infants and adults. However, frequency representation of the fundamental frequency was higher in older infants than in either the younger infants or adults.
Conclusion
Although spectral amplitudes changed during the first year of life, no changes were found with respect to phase locking to the stimulus envelope. These findings demonstrate the feasibility of obtaining these measures of phase locking and fundamental pitch strength in infants as young as 2 months of age.
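Phase-locking strength in frequency-following response work is commonly quantified as across-trial phase coherence at a target frequency: each sweep contributes a unit-magnitude phasor, and the magnitude of their average indexes locking (0 = random phase, 1 = perfect locking). The sketch below illustrates that class of measure, not the authors' exact analysis.

    import numpy as np

    def phase_locking_value(epochs, fs, target_hz):
        """Across-trial phase coherence at one frequency for (n_trials, n_samples) epochs."""
        n_samples = epochs.shape[1]
        k = int(round(target_hz * n_samples / fs))  # FFT bin nearest the target frequency
        bins = np.fft.rfft(epochs, axis=1)[:, k]
        phasors = bins / np.abs(bins)               # keep phase, discard magnitude
        return np.abs(phasors.mean())               # 0 = random, 1 = perfect locking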

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-H-16-0263/2652498/Development-of-Phase-Locking-and-Frequency
via IFTTT

“Whatdunit?” Sentence Comprehension Abilities of Children With SLI: Sensitivity to Word Order in Canonical and Noncanonical Structures

Purpose
With Aim 1, we compared the comprehension of and sensitivity to canonical and noncanonical word order structures in school-age children with specific language impairment (SLI) and same-age typically developing (TD) children. Aim 2 centered on the developmental improvement of sentence comprehension in the groups. With Aim 3, we compared the comprehension error patterns of the groups.
Method
Using a “Whatdunit” agent selection task, 117 children with SLI and 117 TD children (ages 7:0–11:11, years:months) propensity matched on age, gender, mother's education, and family income pointed to the picture that best represented the agent in semantically implausible canonical structures (subject–verb–object, subject relative) and noncanonical structures (passive, object relative).
Results
The SLI group performed worse than the TD group across sentence types. TD children demonstrated developmental improvement across each sentence type, but children with SLI showed improvement only for canonical sentences. Both groups chose the object noun as agent significantly more often than the noun appearing in a prepositional phrase.
Conclusions
In the absence of semantic–pragmatic cues, comprehension of canonical and noncanonical sentences by children with SLI is limited, with noncanonical sentence comprehension being disproportionately limited. The children's ability to make proper semantic role assignments to the noun arguments in sentences, especially noncanonical, is significantly hindered.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-17-0025/2652493/Whatdunit-Sentence-Comprehension-Abilities-of
via IFTTT

Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids

Purpose
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels—in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands—in listeners with hearing impairment using hearing aids.
Method
The study comprised 199 participants with hearing impairment (mean age = 61.1 years) and bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to ensure audibility. The reading span test was used to measure participants' working memory capacity.
Results
Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation.
Conclusion
Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.
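Given the definition of the isolation point used here (the shortest presentation time at which identification becomes and stays correct), it can be read off a sequence of gated responses in a single pass, as in the sketch below; the gate grid and responses are hypothetical.

    def isolation_point(gate_durations_ms, responses_correct):
        """First gate after which every response, including that one, is correct."""
        ip = None
        for gate, correct in zip(gate_durations_ms, responses_correct):
            if correct and ip is None:
                ip = gate          # candidate isolation point
            elif not correct:
                ip = None          # a later error resets the candidate
        return ip

    print(isolation_point([100, 150, 200, 250], [False, True, True, True]))  # 150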

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2016_JSLHR-H-16-0160/2635215/Visual-Cues-Contribute-Differentially-to
via IFTTT

A Cross-Language Study of Acoustic Predictors of Speech Intelligibility in Individuals With Parkinson's Disease

Purpose
The present study aimed to compare acoustic models of speech intelligibility in individuals with the same disease (Parkinson's disease [PD]) and presumably similar underlying neuropathologies but with different native languages (American English [AE] and Korean).
Method
A total of 48 speakers from the 4 speaker groups (AE speakers with PD, Korean speakers with PD, healthy English speakers, and healthy Korean speakers) were asked to read a paragraph in their native languages. Four acoustic variables were analyzed: acoustic vowel space, voice onset time contrast scores, normalized pairwise variability index, and articulation rate. Speech intelligibility scores were obtained from scaled estimates of sentences extracted from the paragraph.
Results
The findings indicated that the multiple regression models of speech intelligibility were different in Korean and AE, even with the same set of predictor variables and with speakers matched on speech intelligibility across languages. Analysis of the descriptive data for the acoustic variables showed the expected compression of the vowel space in speakers with PD in both languages, lower normalized pairwise variability index scores in Korean compared with AE, and no differences within or across language in articulation rate.
Conclusions
The results indicate that the basis of an intelligibility deficit in dysarthria is likely to depend on the native language of the speaker and listener. Additional research is required to explore other potential predictor variables, as well as additional language comparisons to pursue cross-linguistic considerations in classification and diagnosis of dysarthria types.
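Of the four acoustic predictors, the normalized pairwise variability index is the most formula-driven: nPVI = 100 * mean(|d_k - d_(k+1)| / ((d_k + d_(k+1)) / 2)) over successive segment durations d_k. A minimal sketch with made-up vowel durations:

    import numpy as np

    def npvi(durations):
        """Normalized pairwise variability index over successive durations."""
        d = np.asarray(durations, dtype=float)
        pair_means = (d[:-1] + d[1:]) / 2.0
        return 100.0 * np.mean(np.abs(np.diff(d)) / pair_means)

    print(round(npvi([0.12, 0.20, 0.09, 0.15]), 1))  # ~58.6 for these made-up values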

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-S-16-0121/2650812/A-CrossLanguage-Study-of-Acoustic-Predictors-of
via IFTTT

Initial Results of the Early Auditory Referral-Primary Care (EAR-PC) Study.

Am J Prev Med. 2017 Aug 18;:

Authors: Zazove P, Plegue MA, Kileny PR, McKee MM, Schleicher LS, Green LA, Sen A, Rapai ME, Guetterman TC, Mulhem E

Abstract
INTRODUCTION: Hearing loss (HL) is the second most common disability in the U.S., yet it is clinically underdiagnosed. To manage its common adverse psychosocial and cognitive outcomes, early identification of HL must be improved.
METHODS: A feasibility study was conducted to increase screening for HL and referral of patients aged ≥55 years seen at two family medicine clinics. Eligible patients were asked to complete a self-administered consent form and the Hearing Handicap Inventory (HHI). Independently, clinicians received a brief educational program, after which an electronic clinical prompt (the intervention) alerted them (blinded to HHI results) to screen for HL during applicable patient visits. Pre- and post-intervention differences were analyzed to assess the proportion of patients referred to audiology and the proportion diagnosed with HL (primary outcomes), as well as the appropriateness of audiology referrals (secondary outcome). Referral rates for patients who screened positive for HL on the HHI were compared with rates for those who screened negative.
RESULTS: There were 5,520 eligible patients during the study period, of whom 1,236 (22.4%) consented. After the intervention's implementation, audiology referral rates increased from 1.2% to 7.1% (p<0.001). Overall, 293 consented patients (24%) completed the HHI and scored >10, indicating probable HL. Of these 293 patients, 28.0% were referred to audiology versus only 7.4% of those with scores <10 (p<0.001). Forty-two of the 54 referred patients seen by audiology (78%) were diagnosed with HL. Overall, documentation of HL on problem lists increased from 90 of 4,815 patients (1.9%) at baseline to 163 of 5,520 patients (3.0%, p<0.001) over only 8 months.
CONCLUSIONS: The electronic clinical prompt significantly increased audiology referrals for patients at risk for HL in two family medicine clinics. Larger-scale studies are needed to address the U.S. Preventive Services Task Force call to assess the long-term impact of HL screening in community populations.
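For readers who want to reproduce the style of comparison, the pre- versus post-intervention referral rates can be checked with a standard chi-square test on a 2x2 table. The counts below are back-calculated from the reported rates and denominators, so they are approximations, not the study's raw data.

    from scipy.stats import chi2_contingency

    # Rows: pre- and post-intervention; columns: referred, not referred.
    # 1.2% of 4,815 ~ 58 referrals; 7.1% of 5,520 ~ 392 referrals (approximate).
    table = [[58, 4757],
             [392, 5128]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.3g}")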

PMID: 28826949 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2vezs2E
via IFTTT

Histopathology of the Human Inner Ear in the Cogan Syndrome with Cochlear Implantation

The Cogan syndrome is a rare disorder characterized by nonsyphilitic interstitial keratitis and audiovestibular symptoms. Profound sensorineural hearing loss has been reported in approximately half of the patients with the Cogan syndrome, making some patients candidates for cochlear implantation. The current study is the first histopathologic report on the temporal bones of a patient with the Cogan syndrome who during life underwent bilateral cochlear implantation. Preoperative MRI revealed high-density tissue in the basal turns of both cochleae and both vestibular systems, consistent with fibrous tissue due to labyrinthitis. Histopathology demonstrated fibrous tissue and new bone formation within the cochlea and vestibular apparatus, along with severe degeneration of the vestibular end organs; these changes were more pronounced on the right than on the left. Although severe bilateral degeneration of the spiral ganglion neurons was seen, especially on the right, the postoperative word discrimination score was between 50 and 60% bilaterally. Impedance measures were generally higher in the right ear, possibly related to the greater amount of fibrous tissue and new bone found in the scala tympani on that side.
Audiol Neurotol 2017;22:116-123

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2wxsdHb
via IFTTT

Evaluation of the NAL Dynamic Conversations Test in older listeners with hearing loss.

Int J Audiol. 2017 Aug 21;:1-9

Authors: Best V, Keidser G, Freeston K, Buchholz JM

Abstract
OBJECTIVE: The National Acoustic Laboratories Dynamic Conversations Test (NAL-DCT) is a new test of speech comprehension that incorporates a realistic environment and dynamic speech materials that capture certain features of everyday conversations. The goal of this study was to assess the suitability of the test for studying the consequences of hearing loss and amplification in older listeners.
DESIGN: Unaided and aided comprehension scores were measured for single-, two- and three-talker passages, along with unaided and aided sentence recall. To characterise the relevant cognitive abilities of the group, measures of short-term working memory, verbal information-processing speed and reading comprehension speed were collected.
STUDY SAMPLE: Participants were 41 older listeners with varying degrees of hearing loss.
RESULTS: Performance on both the NAL-DCT and the sentence test was strongly driven by hearing loss, but performance on the NAL-DCT was additionally related to a composite cognitive deficit score. Benefits of amplification were measurable but influenced by individual test SNRs.
CONCLUSIONS: The NAL-DCT is sensitive to the same factors as a traditional sentence recall test, but in addition is sensitive to the cognitive factors required for speech processing. The test shows promise as a tool for research concerned with real-world listening.
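The abstract does not spell out how the composite cognitive deficit score was constructed; one common construction, shown here purely as an assumption, z-scores each cognitive test, orients the sign so that larger values mean larger deficits, and averages across tests for each participant.

    import numpy as np

    def composite_deficit(test_scores):
        """Mean z-scored deficit across tests; test_scores maps name -> raw scores (higher = better)."""
        deficits = []
        for scores in test_scores.values():
            v = np.asarray(scores, dtype=float)
            deficits.append(-(v - v.mean()) / v.std())  # flip sign: larger = worse
        return np.mean(deficits, axis=0)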

PMID: 28826285 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader http://ift.tt/2wxtk9J
via IFTTT

Tinnitus could be worsened by antidepressant use

Research suggests that selective serotonin reuptake inhibitors (SSRIs), a common class of antidepressants, could exacerbate tinnitus.

from #Audiology via ola Kala on Inoreader http://ift.tt/2xrnyCT
via IFTTT

Nocebo Effect in Meniere's Disease: a Meta-analysis of Placebo-controlled Randomized Controlled Trials.

Objective: To estimate the frequency and strength of nocebo effects in trials for Meniere disease (MD). Data Sources: A literature search was conducted in PubMed using the search terms "Meniere or Meniere's," "treatment," and "placebo." Searches were limited to Clinical Trial or Randomized Controlled Trial article types, full-text availability, human studies, and English language. Study Selection: We included placebo-controlled pharmaceutical RCTs that referred specifically to MD and recruited at least 10 adults in each arm. We excluded those studies with JADAD score

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2xby41N
via IFTTT