Friday, 24 February 2017

The Utilization of Social Media in the Hearing Aid Community

Purpose
This study investigated the utilization of social media by the hearing aid (HA) community. The purpose of this survey was to analyze the participation of the HA community on social media websites.
Method
A systematic survey of online HA-related social media sources was conducted. Such sources were identified using appropriate search terms. Social media participation was quantified on the basis of posts and “likes.”
Results
Five hundred fifty-seven social media sources were identified, including 174 Twitter accounts, 172 YouTube videos, 91 Facebook pages, 20 Facebook groups, 71 blogs, and 29 forums. Twitter and YouTube platforms showed the highest level of activity among social media users. The HA-related community used social media sources for advice and support, information sharing, and service-related information.
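The category counts reported above can be tallied to confirm that they account for the 557 total — a trivial sketch of the source bookkeeping the survey describes:

```python
# Source counts as reported in the Results section.
sources = {
    "Twitter accounts": 174,
    "YouTube videos": 172,
    "Facebook pages": 91,
    "Facebook groups": 20,
    "blogs": 71,
    "forums": 29,
}

total = sum(sources.values())
print(total)  # 557
```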
Conclusions
HA users, other individuals, and organizations interested in HAs leave their digital footprint on a wide variety of social media sources. The community connects, offers support, and shares information on a variety of HA-related issues. The HA community is as active in social media utilization as other groups, such as the cochlear implant community, even though the patterns of their social media use are different because of their unique needs.

from #Audiology via ola Kala on Inoreader http://ift.tt/2lEPXT8
via IFTTT

On the relation between pitch and level

Publication date: Available online 24 February 2017
Source: Hearing Research
Author(s): Yi Zheng, Romain Brette
Pitch is the perceptual dimension along which musical notes are ordered from low to high. It is often described as the perceptual correlate of the periodicity of the sound's waveform. Previous reports have shown that pitch can depend slightly on sound level. We wanted to verify that these observations reflect genuine changes in perceived pitch and were not due to procedural factors or confusion between the dimensions of pitch and level. We first conducted a systematic pitch-matching task and confirmed that the pitch of low-frequency pure tones, but not complex tones, decreases by an amount equivalent to a change in frequency of more than half a semitone when level increases. We then showed that the structure of pitch shifts is anti-symmetric and transitive, as expected for changes in pitch. We also observed shifts in the same direction (although smaller) in an interval-matching task. Finally, we observed that musicians are more precise in pitch-matching tasks than non-musicians but show the same average shifts with level. These combined experiments confirm that the pitch of low-frequency pure tones depends weakly but systematically on level. These observations pose a challenge to current theories of pitch.
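The anti-symmetry and transitivity the authors report can be illustrated with a toy consistency check; the shift values and level labels below are hypothetical, not the study's data:

```python
# Hypothetical pitch shifts (in semitones) between tones presented at
# three levels. shifts[(a, b)] = perceived pitch change when matching a
# tone at level a against a tone at level b.
shifts = {
    ("soft", "medium"): -0.3,
    ("medium", "soft"): 0.3,
    ("medium", "loud"): -0.4,
    ("soft", "loud"): -0.7,
}

def antisymmetric(shifts, a, b, tol=1e-9):
    """Anti-symmetry: shift(a -> b) should equal -shift(b -> a)."""
    return abs(shifts[(a, b)] + shifts[(b, a)]) < tol

def transitive(shifts, a, b, c, tol=1e-9):
    """Transitivity: shift(a -> c) should equal shift(a -> b) + shift(b -> c)."""
    return abs(shifts[(a, c)] - (shifts[(a, b)] + shifts[(b, c)])) < tol

print(antisymmetric(shifts, "soft", "medium"))      # True
print(transitive(shifts, "soft", "medium", "loud"))  # True
```

A real analysis would test these identities against matching data with measurement noise, but the structure of the check is the same.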



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2lEH5Lx
via IFTTT

Laryngeal Aerodynamics in Healthy Older Adults and Adults With Parkinson's Disease

Purpose
The present study compared laryngeal aerodynamic function of healthy older adults (HOAs) to that of adults with Parkinson's disease (PD) while speaking at comfortable and increased vocal intensities.
Method
Laryngeal aerodynamic measures (subglottal pressure, peak-to-peak flow, minimum flow, and open quotient [OQ]) were compared between HOAs and individuals with PD who had a diagnosis of hypophonia. Increased vocal intensity was elicited via monaurally presented multitalker background noise.
Results
At a comfortable speaking intensity, HOAs and individuals with PD produced comparable vocal intensity, rates of vocal fold closure, and minimum flow. HOAs used smaller OQs, higher subglottal pressure, and lower peak-to-peak flow than individuals with PD. Both groups increased speaking intensity when speaking in noise to the same degree. However, HOAs produced increased intensity with greater driving pressure, faster vocal fold closure rates, and smaller OQs than individuals with PD.
Conclusions
Monaural background noise elicited equivalent vocal intensity increases in HOAs and individuals with PD. Although both groups used laryngeal mechanisms as expected to increase sound pressure level, they used these mechanisms to different degrees. The HOAs appeared to have better control of the laryngeal mechanism to make changes to their vocal intensity.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mu6iHH
via IFTTT

Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning

Purpose
This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information.
Method
Fifty children, with a mean age of 8 years (range 5–12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age.
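A hierarchical regression of this kind — entering age first, then adding nonword repetition and vocabulary, and comparing variance explained — can be sketched with synthetic data. The data-generating coefficients below are illustrative and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # the study tested 50 children

# Synthetic standardized predictors: age, nonword repetition, vocabulary.
age = rng.normal(size=n)
nonword = 0.5 * age + rng.normal(size=n)
vocab = 0.4 * age + rng.normal(size=n)
word_learning = 0.3 * age + 0.5 * nonword + 0.3 * vocab + rng.normal(size=n)

def r_squared(y, *predictors):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: chronological age alone.
r2_age = r_squared(word_learning, age)
# Step 2: age plus nonword repetition and vocabulary knowledge.
r2_full = r_squared(word_learning, age, nonword, vocab)
print(f"variance added by step 2: {r2_full - r2_age:.2f}")
```

The difference `r2_full - r2_age` is the incremental variance attributed to the step-2 predictors after controlling for age, which is the quantity the abstract's "up to 44%" figure summarizes for the full model.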
Results
Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall.
Conclusions
These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mgi3FG
via IFTTT

Compensatory Strategies in the Developmental Patterns of English /s/: Gender and Vowel Context Effects

Purpose
The developmental trajectory of English /s/ was investigated to determine the extent to which children's speech productions are acoustically fine-grained. Given the hypothesis that young children have adultlike phonetic knowledge of /s/, the following were examined: (a) whether this knowledge manifests itself in acoustic spectra that match the gender-specific patterns of adults, (b) whether vowel context affects the spectra of /s/ in adults and children similarly, and (c) whether children adopt compensatory production strategies to match adult acoustic targets.
Method
Several acoustic variables were measured from word-initial /s/ (and /t/) and the following vowel in the productions of children aged 2 to 5 years and adult controls using 2 sets of corpora from the Paidologos database.
Results
Gender-specific patterns in the spectral distribution of /s/ were found. Acoustically, more canonical /s/ was produced before vowels with higher F1 (i.e., lower vowels) in children, a context where lingual articulation is challenging. Measures of breathiness and vowel intrinsic F0 provide evidence that children use a compensatory aerodynamic mechanism to achieve their acoustic targets in articulatorily challenging contexts.
Conclusion
Together, these results provide evidence that children's phonetic knowledge is acoustically detailed and gender specified and that speech production goals are acoustically oriented at early stages of speech development.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mu0bU1
via IFTTT

Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment

Purpose
Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI).
Method
Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests.
Results
Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading in both typically developing children and in children with SLI. Lipreading was also found to correlate with several cognitive skills, for example, short-term memory capacity and verbal motor skills.
Conclusions
Speech processing deficits in SLI also extend to the perception of visual speech. Lipreading performance was associated with phonological skills. Poor lipreading in children with SLI may thus be related to problems in phonological processing.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mgeIqq
via IFTTT

Prosody and Spoken Word Recognition in Early and Late Spanish–English Bilingual Individuals

Purpose
This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals.
Method
Word recognition was assessed in monolingual and bilingual participants when English words were presented with English and Spanish accents in 3 gating conditions: onset only, onset plus prosody/word length only, and onset plus prosody. Word properties were quantified to assess their influence on word recognition in the onset-only condition.
Results
Word recognition speed was proportional to language experience. In the onset-only condition, only word frequency facilitated word recognition across groups. Addition of duration information or prosodic word form did not facilitate word recognition in bilingual individuals the way it did in monolingual individuals. For the bilingual groups, Spanish accent significantly facilitated recognition in the presence of prosodic information. Word attributes were far more consequential in the English accent than in the Spanish accent condition.
Conclusions
Word rhyme information, word properties, and accent affect gated word recognition differently in monolingual and bilingual individuals. Top-down strategies emanating from word properties that may facilitate single-word recognition are experience and context dependent and become less available in the presence of a nonnative accent.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mtWAFo
via IFTTT

Literacy Outcomes for Primary School Children Who Are Deaf and Hard of Hearing: A Cohort Comparison Study

Purpose
In this study, we compared the language and literacy of two cohorts of children with severe–profound hearing loss, recruited 10 years apart, to determine if outcomes had improved in line with the introduction of newborn hearing screening and access to improved hearing aid technology.
Method
Forty-two children with deafness, aged 5–7 years and with a mean unaided loss of 102 dB, were assessed on language, reading, and phonological skills. Their performance was compared with that of a similar group of 32 children with deafness assessed 10 years earlier, as well as a group of 40 children with normal hearing of similar single-word reading ability.
Results
English vocabulary was significantly higher in the new cohort, although age-equivalent scores remained below chronological age. Phonological awareness and reading ability had not significantly changed over time. In both cohorts, English vocabulary predicted reading, but phonological awareness was a significant predictor only for the new cohort.
Conclusions
The current results show that vocabulary knowledge of children with severe–profound hearing loss has improved over time, but there has not been a commensurate improvement in phonological skills or reading. They suggest that children with severe–profound hearing loss will require continued support to develop robust phonological coding skills to underpin reading.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mgcR4G
via IFTTT

Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech

Purpose
The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria.
Method
Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words.
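The scoring rule described above reduces to simple proportions. A minimal sketch, with a hypothetical listener's counts (the word totals are not from the study):

```python
def intelligibility_scores(pretest_correct, posttest_correct, total_words):
    """Initial intelligibility = proportion of pretest words correct;
    improvement = posttest proportion minus pretest proportion."""
    initial = pretest_correct / total_words
    improvement = posttest_correct / total_words - initial
    return initial, improvement

# Hypothetical listener: 24 of 60 words correct before familiarization,
# 39 of 60 correct afterward.
initial, improvement = intelligibility_scores(24, 39, 60)
print(round(initial, 2), round(improvement, 2))  # 0.4 0.25
```

In the study, initial intelligibility was the score predicted by receptive vocabulary, while the improvement score was the one predicted by rhythm perception.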
Results
Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement.
Conclusions
Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mtQxR9
via IFTTT

Laryngeal Aerodynamics in Healthy Older Adults and Adults With Parkinson's Disease

Purpose
The present study compared laryngeal aerodynamic function of healthy older adults (HOA) to adults with Parkinson's disease (PD) while speaking at a comfortable and increased vocal intensity.
Method
Laryngeal aerodynamic measures (subglottal pressure, peak-to-peak flow, minimum flow, and open quotient [OQ]) were compared between HOAs and individuals with PD who had a diagnosis of hypophonia. Increased vocal intensity was elicited via monaurally presented multitalker background noise.
Results
At a comfortable speaking intensity, HOAs and individuals with PD produced comparable vocal intensity, rates of vocal fold closure, and minimum flow. HOAs used smaller OQs, higher subglottal pressure, and lower peak-to-peak flow than individuals with PD. Both groups increased speaking intensity when speaking in noise to the same degree. However, HOAs produced increased intensity with greater driving pressure, faster vocal fold closure rates, and smaller OQs than individuals with PD.
Conclusions
Monaural background noise elicited equivalent vocal intensity increases in HOAs and individuals with PD. Although both groups used laryngeal mechanisms as expected to increase sound pressure level, they used these mechanisms to different degrees. The HOAs appeared to have better control of the laryngeal mechanism to make changes to their vocal intensity.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mu6iHH
via IFTTT

Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning

Purpose
This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information.
Method
Fifty children, with a mean age of 8 years (range 5–12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age.
Results
Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall.
Conclusions
These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mgi3FG
via IFTTT

Compensatory Strategies in the Developmental Patterns of English /s/: Gender and Vowel Context Effects

Purpose
The developmental trajectory of English /s/ was investigated to determine the extent to which children's speech productions are acoustically fine-grained. Given the hypothesis that young children have adultlike phonetic knowledge of /s/, the following were examined: (a) whether this knowledge manifests itself in acoustic spectra that match the gender-specific patterns of adults, (b) whether vowel context affects the spectra of /s/ in adults and children similarly, and (c) whether children adopt compensatory production strategies to match adult acoustic targets.
Method
Several acoustic variables were measured from word-initial /s/ (and /t/) and the following vowel in the productions of children aged 2 to 5 years and adult controls using 2 sets of corpora from the Paidologos database.
Results
Gender-specific patterns in the spectral distribution of /s/ were found. Acoustically, more canonical /s/ was produced before vowels with higher F1 (i.e., lower vowels) in children, a context where lingual articulation is challenging. Measures of breathiness and vowel intrinsic F0 provide evidence that children use a compensatory aerodynamic mechanism to achieve their acoustic targets in articulatorily challenging contexts.
Conclusion
Together, these results provide evidence that children's phonetic knowledge is acoustically detailed and gender specified and that speech production goals are acoustically oriented at early stages of speech development.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mu0bU1
via IFTTT

Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment

Purpose
Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI).
Method
Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests.
Results
Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading in both typically developing children and in children with SLI. Lipreading was also found to correlate with several cognitive skills, for example, short-term memory capacity and verbal motor skills.
Conclusions
Speech processing deficits in SLI extend also to the perception of visual speech. Lipreading performance was associated with phonological skills. Poor lipreading in children with SLI may be, thus, related to problems in phonological processing.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mgeIqq
via IFTTT

Prosody and Spoken Word Recognition in Early and Late Spanish–English Bilingual Individuals

Purpose
This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals.
Method
Word recognition was assessed in monolingual and bilingual participants when English words were presented with English and Spanish accents in 3 gating conditions: onset only, onset plus prosody/word length only, and onset plus prosody. Word properties were quantified to assess their influence on word recognition in the onset-only condition.
Results
Word recognition speed was proportional to language experience. In the onset-only condition, only word frequency facilitated word recognition across groups. Addition of duration information or prosodic word form did not facilitate word recognition in bilingual individuals the way it did in monolingual individuals. For the bilingual groups, Spanish accent significantly facilitated recognition in the presence of prosodic information. Word attributes were far more consequential in the English accent than in the Spanish accent condition.
Conclusions
Word rhyme information, word properties, and accent affect gated word recognition differently in monolingual and bilingual individuals. Top-down strategies emanating from word properties that may facilitate single-word recognition are experience and context dependent and become less available in the presence of a nonnative accent.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mtWAFo
via IFTTT

Literacy Outcomes for Primary School Children Who Are Deaf and Hard of Hearing: A Cohort Comparison Study

Purpose
In this study, we compared the language and literacy of two cohorts of children with severe–profound hearing loss, recruited 10 years apart, to determine if outcomes had improved in line with the introduction of newborn hearing screening and access to improved hearing aid technology.
Method
Forty-two children with deafness, aged 5–7 years with a mean unaided loss of 102 DB, were assessed on language, reading, and phonological skills. Their performance was compared with that of a similar group of 32 children with deafness assessed 10 years earlier and also a group of 40 children with normal hearing of similar single word reading ability.
Results
English vocabulary was significantly higher in the new cohort although it was still below chronological age. Phonological awareness and reading ability had not significantly changed over time. In both cohorts, English vocabulary predicted reading, but phonological awareness was only a significant predictor for the new cohort.
Conclusions
The current results show that vocabulary knowledge of children with severe–profound hearing loss has improved over time, but there has not been a commensurate improvement in phonological skills or reading. They suggest that children with severe–profound hearing loss will require continued support to develop robust phonological coding skills to underpin reading.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mgcR4G
via IFTTT

Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech

Purpose
The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria.
Method
Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words.
Results
Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement.
Conclusions
Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mtQxR9
via IFTTT

Laryngeal Aerodynamics in Healthy Older Adults and Adults With Parkinson's Disease

Purpose
The present study compared laryngeal aerodynamic function of healthy older adults (HOA) to adults with Parkinson's disease (PD) while speaking at a comfortable and increased vocal intensity.
Method
Laryngeal aerodynamic measures (subglottal pressure, peak-to-peak flow, minimum flow, and open quotient [OQ]) were compared between HOAs and individuals with PD who had a diagnosis of hypophonia. Increased vocal intensity was elicited via monaurally presented multitalker background noise.
Results
At a comfortable speaking intensity, HOAs and individuals with PD produced comparable vocal intensity, rates of vocal fold closure, and minimum flow. HOAs used smaller OQs, higher subglottal pressure, and lower peak-to-peak flow than individuals with PD. Both groups increased speaking intensity when speaking in noise to the same degree. However, HOAs produced increased intensity with greater driving pressure, faster vocal fold closure rates, and smaller OQs than individuals with PD.
Conclusions
Monaural background noise elicited equivalent vocal intensity increases in HOAs and individuals with PD. Although both groups used laryngeal mechanisms as expected to increase sound pressure level, they used these mechanisms to different degrees. The HOAs appeared to have better control of the laryngeal mechanism to make changes to their vocal intensity.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mu6iHH
via IFTTT

Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning

Purpose
This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information.
Method
Fifty children, with a mean age of 8 years (range 5–12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age.
Results
Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall.
Conclusions
These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.
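The hierarchical regression logic described above — entering chronological age first, then asking how much additional variance nonword repetition and vocabulary explain — can be sketched as follows. This is an illustrative example on synthetic data, not the study's dataset; variable names and effect sizes are assumptions.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic illustration: a word-learning score driven by age,
# nonword repetition (nwr), and vocabulary knowledge (vocab).
rng = np.random.default_rng(0)
n = 50
age = rng.uniform(5, 12, n)
nwr = rng.normal(size=n)
vocab = rng.normal(size=n)
score = 0.3 * age + 0.8 * nwr + 0.5 * vocab + rng.normal(scale=1.0, size=n)

# Step 1: control variable only; Step 2: add the predictors of interest.
r2_age = r_squared(age[:, None], score)
r2_full = r_squared(np.column_stack([age, nwr, vocab]), score)
print(f"R^2 age only: {r2_age:.3f}, full model: {r2_full:.3f}, "
      f"incremental: {r2_full - r2_age:.3f}")
```

The difference `r2_full - r2_age` is the incremental variance attributable to the step-2 predictors, the quantity the study reports (up to 44% with all predictors included).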

from #Audiology via ola Kala on Inoreader http://ift.tt/2mgi3FG
via IFTTT

Compensatory Strategies in the Developmental Patterns of English /s/: Gender and Vowel Context Effects

Purpose
The developmental trajectory of English /s/ was investigated to determine the extent to which children's speech productions are acoustically fine-grained. Given the hypothesis that young children have adultlike phonetic knowledge of /s/, the following were examined: (a) whether this knowledge manifests itself in acoustic spectra that match the gender-specific patterns of adults, (b) whether vowel context affects the spectra of /s/ in adults and children similarly, and (c) whether children adopt compensatory production strategies to match adult acoustic targets.
Method
Several acoustic variables were measured from word-initial /s/ (and /t/) and the following vowel in the productions of children aged 2 to 5 years and adult controls using 2 sets of corpora from the Paidologos database.
Results
Gender-specific patterns in the spectral distribution of /s/ were found. Acoustically, more canonical /s/ was produced before vowels with higher F1 (i.e., lower vowels) in children, a context where lingual articulation is challenging. Measures of breathiness and vowel intrinsic F0 provide evidence that children use a compensatory aerodynamic mechanism to achieve their acoustic targets in articulatorily challenging contexts.
Conclusion
Together, these results provide evidence that children's phonetic knowledge is acoustically detailed and gender specified and that speech production goals are acoustically oriented at early stages of speech development.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mu0bU1
via IFTTT

Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment

Purpose
Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI).
Method
Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests.
Results
Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading in both typically developing children and children with SLI. Lipreading was also found to correlate with several cognitive skills, for example, short-term memory capacity and verbal motor skills.
Conclusions
Speech processing deficits in SLI extend to the perception of visual speech. Lipreading performance was associated with phonological skills; poor lipreading in children with SLI may thus be related to problems in phonological processing.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mgeIqq
via IFTTT

Prosody and Spoken Word Recognition in Early and Late Spanish–English Bilingual Individuals

Purpose
This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals.
Method
Word recognition was assessed in monolingual and bilingual participants when English words were presented with English and Spanish accents in 3 gating conditions: onset only, onset plus word length, and onset plus prosody. Word properties were quantified to assess their influence on word recognition in the onset-only condition.
Results
Word recognition speed was proportional to language experience. In the onset-only condition, only word frequency facilitated word recognition across groups. Addition of duration information or prosodic word form did not facilitate word recognition in bilingual individuals the way it did in monolingual individuals. For the bilingual groups, Spanish accent significantly facilitated recognition in the presence of prosodic information. Word attributes were far more consequential in the English accent than in the Spanish accent condition.
Conclusions
Word rhyme information, word properties, and accent affect gated word recognition differently in monolingual and bilingual individuals. Top-down strategies emanating from word properties that may facilitate single-word recognition are experience and context dependent and become less available in the presence of a nonnative accent.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mtWAFo
via IFTTT

Literacy Outcomes for Primary School Children Who Are Deaf and Hard of Hearing: A Cohort Comparison Study

Purpose
In this study, we compared the language and literacy of two cohorts of children with severe–profound hearing loss, recruited 10 years apart, to determine if outcomes had improved in line with the introduction of newborn hearing screening and access to improved hearing aid technology.
Method
Forty-two children with deafness, aged 5–7 years with a mean unaided loss of 102 dB, were assessed on language, reading, and phonological skills. Their performance was compared with that of a similar group of 32 children with deafness assessed 10 years earlier and also a group of 40 children with normal hearing of similar single word reading ability.
Results
English vocabulary was significantly higher in the new cohort although it was still below chronological age. Phonological awareness and reading ability had not significantly changed over time. In both cohorts, English vocabulary predicted reading, but phonological awareness was only a significant predictor for the new cohort.
Conclusions
The current results show that vocabulary knowledge of children with severe–profound hearing loss has improved over time, but there has not been a commensurate improvement in phonological skills or reading. They suggest that children with severe–profound hearing loss will require continued support to develop robust phonological coding skills to underpin reading.

from #Audiology via ola Kala on Inoreader http://ift.tt/2mgcR4G
via IFTTT

Attention is Associated with Postural Control in Those with Chronic Ankle Instability

Publication date: Available online 24 February 2017
Source:Gait & Posture
Author(s): Adam B. Rosen, Nicholas T. Than, William Z. Smith, Jennifer M. Yentes, Melanie L. McGrath, Mukul Mukherjee, Sara A. Myers, Arthur C. Maerlender
Chronic ankle instability (CAI) is often debilitating and may be affected by a number of intrinsic and environmental factors. Alterations in neurocognitive function and attention may contribute to repetitive injury in those with CAI and influence postural control strategies. Thus, the purpose of this study was to determine whether attentional functioning and static postural control differed among Comparison, Coper, and CAI groups, and to assess the relationship between them within each group. Recruited participants performed single-limb balance trials and completed the CNS Vital Signs (CNSVS) computer-based assessment of attentional function. Center of pressure (COP) velocity (COPv) and maximum range (COPr), in both the anteroposterior (AP) and mediolateral (ML) directions, were calculated from force plate data. Simple attention (SA), which measures self-regulation and attentional control, was extracted from the CNSVS. Data from 45 participants (15 per group; 27 female, 18 male) were analyzed for this study. No significant differences in attention or COP variables were observed among the groups. However, significant relationships were present between attention and COP variables within the CAI group. CAI participants displayed significant moderate to large correlations between SA and AP COPr (r=−0.59, p=0.010), AP COPv (r=−0.48, p=0.038) and ML COPr (r=−0.47, p=0.034). The results suggest a linear relationship between stability and attention in the CAI group. Attentional self-regulation may moderate how those with CAI control postural stability. Incorporating neurocognitive training focused on attentional control may improve outcomes in those with CAI.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2lzNglX
via IFTTT

Trunk sway in idiopathic normal pressure hydrocephalus—Quantitative assessment in clinical practice

Publication date: Available online 24 February 2017
Source:Gait & Posture
Author(s): Tomas Bäcklund, Jennifer Frankel, Hanna Israelsson, Jan Malm, Nina Sundström
Background: In the diagnosis and treatment of patients with idiopathic normal pressure hydrocephalus (iNPH), there is a need for clinically applicable, quantitative assessment of balance and gait. Using a body-worn gyroscopic system, the aim of this study was to assess the postural stability of iNPH patients while standing, walking, and during sensory deprivation, before and after cerebrospinal fluid (CSF) drainage and surgery. A comparison was performed between healthy elderly (HE) and patients with various types of hydrocephalus (ventriculomegaly, VM). Methods: Trunk sway was measured in 31 iNPH patients, 22 VM patients, and 58 HE. Measurements were performed at baseline in all subjects, after CSF drainage in both patient groups, and after shunt surgery in the iNPH group. Results: Preoperatively, the iNPH patients had significantly higher trunk sway compared to HE, specifically for the standing tasks (p < 0.001). Compared to VM, iNPH patients had significantly lower sway velocity during gait in three of four cases on firm support (p < 0.05). Sway velocity improved after CSF drainage and, in the forward-backward direction, after surgery (p < 0.01). Compared to HE, both patient groups demonstrated less reliance on visual input to maintain a stable posture. Conclusions: iNPH patients had reduced postural stability compared to HE, particularly during standing, and for differentiation between iNPH and VM patients, sway velocity during gait is a promising parameter. A reversible reduction of visual incorporation during standing was also seen. Thus, the gyroscopic system quantitatively assessed postural deficits in iNPH, making it a potentially useful tool for aiding future diagnoses, choices of treatment, and clinical follow-up.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2lh9ExL
via IFTTT

Hearing aid technology: model-based concepts and assessment

.


from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mfeZtx
via IFTTT

Promoting global action on hearing loss: World Hearing Day

.


from #Audiology via xlomafota13 on Inoreader http://ift.tt/2msFgR7
via IFTTT

Two microphones spectral-coherence based speech enhancement for hearing aids using smartphone as an assistive device.

Conf Proc IEEE Eng Med Biol Soc. 2016 Aug;2016:3670-3673

Authors: Reddy CK, Hao Y, Panahi I

Abstract
In this paper, we present a new Speech Enhancement (SE) technique capable of running on a smartphone as an assistive device for hearing aids (HAs). The developed method incorporates the coherence between the speech and noise signals to obtain an SE gain function, which is used in conjunction with the gain function obtained by Spectral Subtraction with adaptive gain averaging. SE using the coherence-based gain function is found to suppress background noise well while inducing speech distortion. On the other hand, SE using Spectral Subtraction improves speech quality with tolerable speech distortion but introduces background musical noise for certain noise types. The weighted fusion of the two gain functions strikes a balance between noise suppression and speech distortion. It also allows the user to control the weighting factor based on the noisy environment and their hearing comfort level. The developed method is computationally fast and operates in real time. The proposed method was evaluated for machinery, babble, and car noise types, using both objective and subjective measures of both quality and intelligibility of the enhanced speech. The results show significant improvements in comparison with the stand-alone Spectral Subtraction with weighted gain averaging SE method.
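The core idea — fusing a coherence-based gain with a spectral-subtraction gain under a user-controlled weight — can be sketched per frequency bin as below. This is a minimal illustrative sketch, not the authors' exact formulation: the function names, the spectral floor, and the linear fusion rule are assumptions.

```python
import numpy as np

def spectral_subtraction_gain(noisy_psd, noise_psd, floor=0.05):
    """Classic spectral-subtraction gain with a spectral floor to
    limit musical noise (floor value is an illustrative choice)."""
    ratio = 1.0 - noise_psd / np.maximum(noisy_psd, 1e-12)
    return np.sqrt(np.maximum(ratio, floor ** 2))

def coherence_gain(coherence):
    """Treat two-microphone magnitude-squared coherence as a gain:
    high coherence suggests directional speech, low suggests diffuse noise."""
    return np.clip(coherence, 0.0, 1.0)

def fused_gain(g_coh, g_ss, alpha=0.5):
    """Weighted fusion; alpha trades noise suppression (coherence term)
    against speech distortion (spectral-subtraction term)."""
    return alpha * g_coh + (1.0 - alpha) * g_ss

# Toy per-bin values (hypothetical, for illustration only)
noisy_psd = np.array([2.0, 1.5, 3.0, 0.8])
noise_psd = np.array([0.5, 1.0, 0.4, 0.6])
coh = np.array([0.9, 0.3, 0.95, 0.2])

g = fused_gain(coherence_gain(coh), spectral_subtraction_gain(noisy_psd, noise_psd))
print(g)
```

The enhanced spectrum would then be the noisy spectrum multiplied by `g` bin by bin; exposing `alpha` to the listener mirrors the paper's point that the preferred balance depends on the noise environment.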

PMID: 28227311 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2mkfHm6
via IFTTT

Age-related hearing loss and dementia: a 10-year national population-based study.

Eur Arch Otorhinolaryngol. 2017 Feb 22;:

Authors: Su P, Hsu CC, Lin HC, Huang WS, Yang TL, Hsu WT, Lin CL, Hsu CY, Chang KH, Hsu YC

Abstract
Age-related hearing loss (ARHL) is postulated to be a risk factor for dementia. Our study aims to investigate the relationship between ARHL and the prevalence and 10-year incidence of dementia in the Taiwan National Health Insurance Research Database (NHIRD). We selected patients diagnosed with ARHL from the NHIRD. A comparison cohort comprising patients without ARHL was frequency-matched by age, sex, and co-morbidities, and the occurrence of dementia was evaluated in both cohorts. The ARHL cohort consisted of 4108 patients with ARHL, and the control cohort consisted of 4013 frequency-matched patients without ARHL. The incidence of dementia was higher among ARHL patients [hazard ratio (HR), 1.30; 95% confidence interval (CI), 1.14-1.49; P = 0.002]. Cox models showed that being female (HR, 1.34; 95% CI, 1.07-1.68), as well as having co-morbidities, including chronic liver disease and cirrhosis, rheumatoid arthritis, hypertension, diabetes mellitus, stroke, head injury, chronic kidney disease, coronary artery disease, alcohol abuse/dependence, and tobacco abuse/dependence (HR, 1.27; 95% CI, 1.11-1.45), were independent risk factors for dementia in ARHL patients. We found that ARHL may be one of the early characteristics of dementia, and patients with hearing loss were at a higher risk of subsequent dementia. Clinicians should be more sensitive to dementia symptoms within the first 2 years following an ARHL diagnosis. Further clinical studies of the relationship between dementia and ARHL may be necessary.

PMID: 28229293 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2lz7onU
via IFTTT

Marshall syndrome in a young child, a reality: Case report.

Medicine (Baltimore). 2016 Nov;95(44):e5065

Authors: Trandafir LM, Chiriac MI, Diaconescu S, Ioniuc I, Miron I, Rusu D

Abstract
BACKGROUND: Recurrent fever syndrome, known as Marshall syndrome (MS), is a clinical entity with several characteristic features: fever (39-40°C) that recurs at variable intervals (3-8 weeks) in episodes of 3 to 6 days, cervical adenopathy, pharyngitis, and aphthous stomatitis. The diagnosis of MS is one of exclusion; laboratory data are nonspecific, and no abnormalities correlated with MS have been detected thus far.
METHODS: The authors report the case of a 2-year-old girl admitted to a tertiary pediatric center for repeated episodes of fever with aphthous stomatitis and laterocervical adenopathy.
RESULTS: The child's case history raised the suspicion of MS, which was subsequently confirmed by exclusion of all the other differential diagnoses (recurrent tonsillitis, juvenile idiopathic arthritis, Behçet's disease, cyclic neutropenia, hyperglobulinemia D syndrome). After the 3 febrile episodes, bilateral tonsillectomy was performed based on the parents' consent, with favorable immediate and remote postoperative clinical outcomes. The diagnosis of MS is one based on exclusion, as laboratory data is nonspecific. We took into consideration other causes of recurrent fever (recurrent tonsillitis, infectious diseases, juvenile idiopathic arthritis, Behçet's disease, cyclic neutropenia, Familial Mediterranean fever syndrome, hyperglobulinemia D syndrome). In our case, MS criteria were met through clinical examination and the child's outcome. Subsequently, laboratory data helped us establish the MS diagnosis.
CONCLUSIONS: Pediatricians should consider the MS diagnosis in the context of recurrent fever episodes associated with at least one of the following symptoms: pharyngitis, cervical adenopathy or aphthous stomatitis. Despite the indication for tonsillectomy in young children being controversial, in this case the surgery led to the total remission of the disease.

PMID: 27858841 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2g88q8F
via IFTTT

Phosphodiesterase 4D gene polymorphisms in sudden sensorineural hearing loss.

Eur Arch Otorhinolaryngol. 2016 Sep;273(9):2403-9

Authors: Chien CY, Tai SY, Wang LF, Hsi E, Chang NC, Wang HM, Wu MT, Ho KY

Abstract
The phosphodiesterase 4D (PDE4D) gene has been reported as a risk gene for ischemic stroke. Vascular factors are among the hypothesized etiologies of sudden sensorineural hearing loss (SSNHL), so this genetic effect might be attributable to a role in SSNHL. We hypothesized that genetic variants of the PDE4D gene are associated with susceptibility to SSNHL. We conducted a case-control study with 362 SSNHL cases and 209 controls. Three single nucleotide polymorphisms (SNPs) were selected. The genotypes were determined using TaqMan technology. Hardy-Weinberg equilibrium (HWE) was tested for each SNP, and genetic effects were evaluated according to three inheritance modes. Sex-specific analyses were carried out on the overall data. All three SNPs were in HWE. When subjects were stratified by sex, the genetic effect was evident only in females, not in males. The TT genotype of rs702553 exhibited an adjusted odds ratio (OR) of 3.83 (95% confidence interval = 1.46-11.18) (p = 0.006) in female SSNHL. The TT genotype of SNP rs702553 was associated with female SSNHL under the recessive model (p = 0.004, OR 3.70). In multivariate logistic regression analysis, the TT genotype of rs702553 was significantly associated with female SSNHL (p = 0.0043, OR 3.70). These results suggest that PDE4D gene polymorphisms influence susceptibility to the development of SSNHL in the southern Taiwanese female population.

PMID: 26521189 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mjWCR2
via IFTTT

To Ear and Hearing Peer Reviewers: Thank You

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ms0TB2
via IFTTT

On the Etiology of Listening Difficulties in Noise Despite Clinically Normal Audiograms

Many people with difficulties following conversations in noisy settings have “clinically normal” audiograms, that is, tone thresholds better than 20 dB HL from 0.1 to 8 kHz. This review summarizes the possible causes of such difficulties, and examines established as well as promising new psychoacoustic and electrophysiologic approaches to differentiate between them. Deficits at the level of the auditory periphery are possible even if thresholds remain around 0 dB HL, and become probable when they reach 10 to 20 dB HL. Extending the audiogram beyond 8 kHz can identify early signs of noise-induced trauma to the vulnerable basal turn of the cochlea, and might point to “hidden” losses at lower frequencies that could compromise speech reception in noise. Listening difficulties can also be a consequence of impaired central auditory processing, resulting from lesions affecting the auditory brainstem or cortex, or from abnormal patterns of sound input during developmental sensitive periods and even in adulthood. Such auditory processing disorders should be distinguished from (cognitive) linguistic deficits, and from problems with attention or working memory that may not be specific to the auditory modality. Improved diagnosis of the causes of listening difficulties in noise should lead to better treatment outcomes, by optimizing auditory training procedures to the specific deficits of individual patients, for example.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrWXjI
via IFTTT

Guest Editorial: Promoting Global Action On Hearing Loss: World Hearing Day

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2meInjM
via IFTTT

Background Noise Degrades Central Auditory Processing in Toddlers: Erratum

No abstract available

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrZ6vU
via IFTTT

Discourse Strategies and the Production of Prosody by Prelingually Deaf Adolescent Cochlear Implant Users

Objectives: The purpose of this study is to assess the use of discourse strategies and the production of prosody by prelingually deaf adolescent users of cochlear implants (CIs) when participating in a referential communication task. We predict that CI users will issue more directives (DIRs) and make less use of information requests (IRs) in completing the task than their normally hearing (NH) peers. We also predict that in signaling these IRs and DIRs, the CI users will produce F0 rises of lesser magnitude than the NH speakers. Design: Eight prelingually deaf adolescent CI users and 8 NH adolescents completed a referential communication task, where participants were required to direct their interlocutor around a map. Participants were aged from 12.0 to 14.2 years. The mean age at implantation for the CI group was 2.1 years (SD 0.9). The use of IRs, DIRs, acknowledgments, and comments was compared between the two groups. The use and magnitude of fundamental frequency (F0) rises on IRs and DIRs was also compared. Results: The CI users differed from the NH speakers in how they resolved communication breakdown. The CI users showed a preference for repeating DIRs, rather than seeking information as did the NH speakers. A nonparametric Mann–Whitney U test indicated that the CI users issued more DIRs (U = 8, p = 0.01), produced fewer IRs (U = 13, p = 0.05) and fewer acknowledgments (U = 5, p = 0.003) than their NH counterparts. The CI users also differed in how they used F0 rises as a prosodic cue to signal IRs and DIRs. The CI users produced larger F0 rises on DIRs than on IRs, a pattern opposite to that displayed by the NH speakers. An independent samples t-test revealed that the CI users produced smaller rises on IRs compared with those produced by the NH speakers [t(12) = −2.762, p = 0.02]. Conclusions: The CI users differed from the NH speakers in how they resolved communication breakdown.
The CI users showed a preference for repeating DIRs, rather than seeking information to understand their interlocutor’s point of view. Their use of prosody to signal discourse function also differed from their NH peers. These differences may indicate a lack of understanding of how prosody is used to signal discourse modality by the CI users. This study highlights the need for further research focused on the interaction of prosody, discourse, and language comprehension.
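The group comparisons above use the nonparametric Mann–Whitney U test, which suits a small 8-per-group design. A minimal sketch with hypothetical per-participant directive counts (not the study's data); note SciPy reports U for the first sample, which can differ from the minimum-U convention sometimes used in papers.

```python
from scipy.stats import mannwhitneyu

# Hypothetical number of directives (DIRs) issued per participant;
# illustrative values only, not the study's data
ci_users = [14, 12, 15, 11, 13, 16, 12, 14]
nh_peers = [8, 9, 7, 10, 8, 6, 9, 7]

# Two-sided test comparing the two independent groups
u_stat, p_value = mannwhitneyu(ci_users, nh_peers, alternative="two-sided")
```

With every CI value exceeding every NH value, U takes its maximum (8 × 8 = 64) and the p-value is small, mirroring the qualitative pattern reported above.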

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrU8iS
via IFTTT

Psychological Therapy for People with Tinnitus: A Scoping Review of Treatment Components

Background: Tinnitus is associated with depression and anxiety disorders, severely and adversely affecting the quality of life and functional health status for some people. With the dearth of clinical psychologists embedded in audiology services and the cessation of training for hearing therapists in the UK, it is left to audiologists to meet the psychological needs of many patients with tinnitus. However, there is no universally standardized training or manualized intervention specifically for audiologists across the whole UK public healthcare system and similar systems elsewhere across the world. Objectives: The primary aim of this scoping review was to catalog the components of psychological therapies for people with tinnitus, which have been used or tested by psychologists, so that they might inform the development of a standardized audiologist-delivered psychological intervention. Secondary aims of this article were to identify the types of psychological therapy for people with tinnitus that were reported but not tested in any clinical trial, as well as the job roles of clinicians who delivered psychological therapy for people with tinnitus in the literature. Design: The authors searched the Cochrane Ear, Nose and Throat Disorders Group Trials Register; Cochrane Central Register of Controlled Trials; PubMed; EMBASE; CINAHL; LILACS; KoreaMed; IndMed; PakMediNet; CAB Abstracts; Web of Science; BIOSIS Previews; ISRCTN; ClinicalTrials.gov; IC-TRP; and Google Scholar. In addition, the authors searched the gray literature, including conference abstracts, dissertations, and editorials. No records were excluded from the search results on the basis of controls used, outcomes reached, timing, setting, or study design (except for reviews). Records were included in which a psychological therapy intervention was reported to address adults' (≥18 years) tinnitus-related distress. No restrictive criteria were placed upon the term tinnitus.
Records were excluded in which the intervention included biofeedback, habituation, hypnosis, or relaxation as necessary parts of the treatment. Results: A total of 5043 records were retrieved, of which 64 were retained. Twenty-five themes of components that have been included within a psychological therapy were identified: tinnitus education, psychoeducation, evaluation, treatment rationale, treatment planning, problem solving, behavioral intervention, thought identification, thought challenging, worry time, emotions, social comparison, interpersonal skills, self-concept, lifestyle advice, acceptance and defusion, mindfulness, attention, relaxation, sleep, sound enrichment, comorbidity, treatment reflection, relapse prevention, and common therapeutic skills. The most frequently reported psychological therapies were cognitive behavioral therapy, tinnitus education, and internet-delivered cognitive behavioral therapy. No records reported that an audiologist delivered any of these psychological therapies in the context of an empirical trial in which their role was clearly delineated from that of other clinicians. Conclusions: Scoping review methodology does not attempt to appraise the quality of evidence or synthesize the included records. Further research should therefore determine the relative importance of these different components of psychological therapies from the perspective of the patient and the clinician.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2msbf3X
via IFTTT

Optimizations for the Electrically-Evoked Stapedial Reflex Threshold Measurement in Cochlear Implant Recipients

Objective: The electrically-evoked stapedial reflex threshold (eSRT) has proven to be useful in setting upper stimulation levels of cochlear implant recipients. However, the literature suggests that the reflex can be difficult to observe in a significant percentage of the population. The primary goal of this investigation was to assess the difference in eSRT levels obtained with alternative acoustic admittance probe tone frequencies. Design: A repeated-measures design was used to examine the effect of 3 probe tone frequencies (226, 678, and 1000 Hz) on eSRT in 23 adults with cochlear implants. Results: The mean eSRT measured using the conventional probe tone of 226 Hz was significantly higher than the mean eSRT measured with use of 678 and 1000 Hz probe tones. The mean eSRTs were 174, 167, and 165 charge units with use of 226, 678, and 1000 Hz probe tones, respectively. There was not a statistically significant difference between the average eSRTs for probe tones 678 and 1000 Hz. Twenty of 23 participants had eSRTs at lower charge unit levels with use of either a 678 or 1000 Hz probe tone when compared with the 226 Hz probe tone. Two participants had eSRTs measured with 678 or 1000 Hz probe tones that were equal in level to the eSRT measured with a 226 Hz probe tone. Only 1 participant had an eSRT that was obtained at a lower charge unit level with a 226 Hz probe tone relative to the eSRT obtained with a 678 and 1000 Hz probe tone. Conclusions: The results of this investigation demonstrate that the use of a standard 226 Hz probe tone is not ideal for measurement of the eSRT. The use of higher probe tone frequencies (i.e., 678 or 1000 Hz) resulted in lower eSRT levels when compared with the eSRT levels obtained with use of a 226 Hz probe tone. In addition, 4 of the 23 participants included in this study did not have a measurable eSRT with use of a 226 Hz probe tone, but all of the participants had measurable eSRTs with use of both the 678 and 1000 Hz probe tones. Additional work is required to understand the clinical implication of these changes in the context of cochlear implant programming.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2msdHra
via IFTTT

Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability—Implications for Cochlear Implant Candidacy

Objectives: At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise, yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design: The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age–sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results: Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures.
The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions: Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ms19Qn
via IFTTT

Improving Mobile Phone Speech Recognition by Personalized Amplification: Application in People with Normal Hearing and Mild-to-Moderate Hearing Loss

Purpose: In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. Methods: This prospective study used double-blind, within-subjects, repeated measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults aged 20 to 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences of the settings were also obtained by a paired-comparison procedure. Results: The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly better for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. Conclusions: The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.
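The amplification settings above are described as based on modified one-third gain targets. The abstract does not specify the modifications, so this sketch implements only the plain one-third gain rule (insertion gain equal to one third of the threshold in dB HL at each frequency), with a hypothetical audiogram.

```python
def one_third_gain(thresholds_db_hl):
    """Prescribe insertion gain (dB) as one third of the hearing
    threshold (dB HL) at each audiometric frequency.
    Note: the study used *modified* targets; the corrections are not
    given in the abstract, so this is the unmodified baseline rule."""
    return {f: round(t / 3.0, 1) for f, t in thresholds_db_hl.items()}

# Hypothetical mild-to-moderate audiogram {frequency_Hz: threshold_dB_HL}
audiogram = {500: 30, 1000: 40, 2000: 50, 4000: 60}
gains = one_third_gain(audiogram)
# e.g. a 50 dB HL threshold at 2 kHz prescribes roughly 16.7 dB of gain
```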

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ms2cjB
via IFTTT

Auditory Distraction and Acclimatization to Hearing Aids

Objective: It is widely recognized by hearing aid users and audiologists that a period of auditory acclimatization and adjustment is needed for new users to become accustomed to their devices. The aim of the present study was to test the idea that auditory acclimatization and adjustment to hearing aids involves a process of learning to “tune out” newly audible but undesirable sounds, which are described by new hearing aid users as annoying and distracting. It was hypothesized that (1) speech recognition thresholds in noise would improve over time for new hearing aid users, (2) distractibility to noise would reduce over time for new hearing aid users, (3) there would be a correlation between improved speech recognition in noise and reduced distractibility to background sounds, (4) improvements in speech recognition and distraction would be accompanied by self-report of reduced annoyance, and (5) improvements in speech recognition and distraction would be associated with higher general cognitive ability and more hearing aid use. Design: New adult hearing aid users (n = 35) completed a test of aided speech recognition in noise (SIN) and a test of auditory distraction by background sound amplified by hearing aids on the day of fitting and 1, 7, 14, and 30 days post fitting. At day 30, participants completed self-ratings of the annoyance of amplified sounds. Daily hearing aid use was measured via hearing aid data logging, and cognitive ability was measured with the Wechsler Abbreviated Scale of Intelligence block design test. A control group of experienced hearing aid users (n = 20) completed the tests over a similar time frame. Results: At day 30, there was no statistically significant improvement in SIN among new users versus experienced users. However, levels of hearing loss and hearing aid use varied widely among new users.
A subset of new users with moderate hearing loss who wore their hearing aids at least 6 hr/day (n = 10) had significantly improved SIN (by ~3-dB signal to noise ratio), compared with a control group of experienced hearing aid users. Improvements in SIN were associated with more consistent hearing aid use and more severe hearing loss. No improvements in the test of auditory distraction by background sound were observed. Improvements in SIN were associated with self-report of background sound being less distracting and greater self-reported hearing aid benefit. There was no association between improvements in SIN and cognitive ability or between SIN and auditory distraction. Conclusions: Improvements in SIN were accompanied by self-report of background sounds being less intrusive, consistent with auditory acclimatization involving a process of learning to “tune out” newly audible unwanted sounds. More severe hearing loss may afford the room for improvement required to show better SIN performance with time. Consistent hearing aid use may facilitate acclimatization to hearing aids and better SIN performance.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2msbeNr
via IFTTT

Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials

Objectives: Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine if CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed loop systems. Design: Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions, and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility to reuse a trained classifier in future sessions were evaluated. Results: Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier causes only a small loss in classification performance. Conclusions: Our data provide the first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users.
In the future, this has the potential to objectify and automate parts of CI fitting procedures.
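Shrinkage linear discriminant analysis, as used above for single-trial classification, is available in scikit-learn via the 'lsqr' solver with Ledoit–Wolf shrinkage. This sketch uses synthetic feature vectors standing in for epoched EEG trials; the feature construction is an assumption for illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for single-trial EEG feature vectors
# (e.g., concatenated channel x time-window means): 120 trials, 60 features
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))
y = np.repeat([0, 1], 60)          # 0 = standard, 1 = deviant stimulus
X[y == 1, :10] += 0.8              # class effect on a few "channels"

# Shrinkage regularizes the covariance estimate, which matters when the
# number of features approaches the number of trials, as in EEG
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)   # chance level is 0.5
```

Cross-validated accuracy well above 0.5 corresponds to the "above chance level" criterion reported in the abstract.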

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrZfiZ
via IFTTT

Auditory Performance and Electrical Stimulation Measures in Cochlear Implant Recipients With Auditory Neuropathy Compared With Severe to Profound Sensorineural Hearing Loss

Objectives: The aim of the study was to compare auditory and speech outcomes and electrical parameters on average 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). Design: The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN with current age of 5 to 12.2 years who had been using a cochlear implant for at least 3.4 years and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) Auditory and speech tests. (2) Residual hearing. (3) Electrical stimulation parameters. (4) Correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes.
Results: The children with isolated AN performed as well as the children with SNHL on auditory and speech recognition tests in both quiet and noise. More children in the AN group than the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, lower comfortable level and dynamic range, and lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds. Conclusions: Prelingual children with isolated AN who fail to show expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a similar pattern to children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution of electrophonic hearing from the remaining neurons and hair cells. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrYaYf
via IFTTT

Fast Click Rate Electrocochleography and Auditory Brainstem Response in Normal-Hearing Adults Using Continuous Loop Averaging Deconvolution

Objectives: Using the continuous loop averaging deconvolution (CLAD) technique for conventional electrocochleography (ECochG) and auditory brainstem response (ABR) recordings, testing at high stimulus rates may have the potential to aid diagnosis of disorders of the inner ear and auditory nerve. First, a body of normative data using the CLAD technique must be established. Design: Extratympanic click ECochG and ABR to seven stimulus rates using CLAD were measured simultaneously from a tympanic membrane electrode and surface electrodes on the forehead and mastoid of 42 healthy individuals. Results: Results showed that the compound action potential (AP) of the ECochG and waves I, III, and V of the ABR decreased in amplitude and increased in latency as stimulus rate was increased from the standard 7.1 clicks/s up to 507.81 clicks/s, with a sharp reduction in AP amplitude at 97.66 clicks/s and reaching asymptote at 292.97 clicks/s. The summating potential (SP) of the ECochG, however, stayed relatively stable, resulting in increased SP/AP ratios with increasing rate. The SP/AP amplitude ratio showed more stability than AP amplitude findings; thus it is recommended for use in evaluation of cochlear and neural responses. Conclusions: The amplitude and latency data from this normative neural adaptation function of the auditory pathway serve as a guide for improving the diagnostic utility of both ECochG and ABR, using CLAD as a reliable technique for distinguishing inner ear and auditory nerve disorders.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ms0ZZx
via IFTTT

Characterizing Speech Intelligibility in Noise After Wide Dynamic Range Compression

Objectives: The effects of nonlinear signal processing on speech intelligibility in noise are difficult to evaluate. Often, the effects are examined by comparing speech intelligibility scores with and without processing measured at fixed signal to noise ratios (SNRs), or by comparing the adaptively measured speech reception thresholds corresponding to 50% intelligibility (SRT50) with and without processing. These outcome measures might not be optimal. Measuring at fixed SNRs can be affected by ceiling or floor effects, because the range of relevant SNRs is not known in advance. The SRT50 is less time consuming and has a fixed performance level (i.e., 50% correct), but it could give a limited view, because we hypothesize that the effect of most nonlinear signal processing algorithms at the SRT50 cannot be generalized to other points of the psychometric function. Design: In this article, we tested the value of estimating the entire psychometric function. We studied the effect of wide dynamic range compression (WDRC) on speech intelligibility in stationary and in interrupted speech-shaped noise in normal-hearing subjects, using a fast method-based local linear fitting approach and two adaptive procedures. Results: The measured performance differences for conditions with and without WDRC for the psychometric functions in stationary noise and interrupted speech-shaped noise show that the effects of WDRC on speech intelligibility are SNR dependent. Conclusions: We conclude that favorable and unfavorable effects of WDRC on speech intelligibility can be missed if the results are presented in terms of SRT50 values only.
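Estimating the full psychometric function rather than a single SRT50 can be sketched by fitting a logistic function of SNR to proportion-correct scores. The data points below are hypothetical, and the paper's local linear fitting method is replaced here by a simple parametric fit for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt50, slope):
    """Logistic psychometric function: proportion correct vs. SNR (dB).
    srt50 is the SNR giving 50% intelligibility."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt50)))

# Hypothetical proportion-correct scores at fixed SNRs (not the study's data)
snrs = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
p_correct = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])

(srt50, slope), _ = curve_fit(psychometric, snrs, p_correct, p0=(-5.0, 1.0))
# Comparing whole fitted curves (with vs. without WDRC) exposes the
# SNR-dependent effects that a single SRT50 comparison can miss.
```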

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrUtSk
via IFTTT

Evaluating the Precision of Auditory Sensory Memory as an Index of Intrusion in Tinnitus

Objectives: The purpose of this study was to investigate the potential of measures of auditory short-term memory (ASTM) to provide a clinical measure of intrusion in tinnitus. Design: Response functions for six normal listeners on a delayed pitch discrimination task were contrasted in three conditions designed to manipulate attention in the presence and absence of simulated tinnitus: (1) no-tinnitus, (2) ignore-tinnitus, and (3) attend-tinnitus. Results: Delayed pitch discrimination functions were more variable in the presence of simulated tinnitus when listeners were asked to divide attention between the primary task and the amplitude of the tinnitus tone. Conclusions: Changes in the variability of auditory short-term memory may provide a novel means of quantifying the level of intrusion associated with the tinnitus percept during listening.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2msexnA
via IFTTT

Comparison of Different Electrode Configurations for the oVEMP With Bone-Conducted Vibration

Objectives: This study was performed to compare three electrode configurations for the ocular vestibular evoked myogenic potentials (oVEMPs)—“standard,” “sternum,” and “nose”—using bone-conducted stimuli (at the level of Fz with a minishaker). In the second part, we compared the test–retest reliability of the standard and nose electrode configurations on the oVEMP parameters. Design: This study had a prospective design. Fourteen healthy subjects participated in the first part (4 males, 10 females; average age = 23.4 (SD = 2.6) years; age range 19.9 to 28.3 years) and second part (3 males, 11 females; average age = 22.7 (SD = 2.4) years; age range 20.0 to 28.0 years) of the study. oVEMPs were recorded using a hand-held bone conduction vibrator (minishaker). Tone bursts of 500 Hz (rise/fall time = 2 msec; plateau time = 2 msec; repetition rate = 5.1 Hz) were applied at a constant stimulus intensity level of 140 dB FL. Results: PART 1: The n10–p15 amplitude obtained with the standard electrode configuration (mean = 15.8 μV; SD = 6.3 μV) was significantly smaller than the amplitude measured with the nose (Z = −3.3; p = 0.001; mean = 35.0 μV; SD = 19.1 μV) and sternum (Z = −3.3; p = 0.001; mean = 27.1 μV; SD = 12.2 μV) electrode configurations. The p15 latency obtained with the nose electrode configuration (mean = 14.2 msec; SD = 0.54 msec) was significantly shorter than the p15 latency measured with the standard (Z = −3.08; p = 0.002; mean = 14.9 msec; SD = 0.75 msec) and sternum (Z = −2.98; p = 0.003; mean = 15.4 msec; SD = 1.07 msec) electrode configurations. There were no differences between the n10 latencies of the three electrode configurations. The 95% prediction intervals (given by the mean ± 1.96 * SD) for the different interocular ratio values were [−41.2; 41.2], [−37.2; 37.2], and [−25.9; 25.9] for the standard, sternum, and nose electrode configurations, respectively.
PART 2: Intraclass correlation (ICC) values calculated for the oVEMP parameters obtained with the standard electrode configuration showed fair to good reliability for the n10–p15 amplitude (ICC = 0.51) and the n10 (ICC = 0.52) and p15 (ICC = 0.60) latencies. The ICC values obtained for the parameters acquired with the nose electrode configuration demonstrated poor reliability for the n10 latency (ICC = 0.37), fair to good reliability for the p15 latency (ICC = 0.47), and excellent reliability for the n10–p15 amplitude (ICC = 0.85). Conclusions: This study showed possible benefits of alternative electrode configurations for measuring bone-conducted-evoked oVEMPs in comparison with the standard electrode configuration. The nose configuration seems promising, but further research is required to justify clinical use of this placement.
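The statistics quoted in this abstract follow standard conventions: the 95% prediction interval is constructed as mean ± 1.96 × SD, and the reliability labels correspond to the usual ICC cutoffs (poor below 0.40, fair to good up to 0.75, excellent above). A minimal sketch; the helper names and the illustrative interocular-ratio values are assumptions, not taken from the paper:

```python
def prediction_interval_95(mean, sd):
    """95% prediction interval as mean +/- 1.96 * SD (normality assumed)."""
    half_width = 1.96 * sd
    return (mean - half_width, mean + half_width)

def icc_category(icc):
    """Map an intraclass correlation to the conventional reliability label
    (cutoffs consistent with the categories used in the abstract)."""
    if icc < 0.40:
        return "poor"
    if icc <= 0.75:
        return "fair to good"
    return "excellent"

# Illustrative check: a symmetric interval of [-41.2, 41.2] implies a mean
# near 0 and an SD of about 41.2 / 1.96 ~ 21.0 (assumed, not reported).
low, high = prediction_interval_95(0.0, 41.2 / 1.96)
```

With these cutoffs, the reported values fall out as in the abstract: ICC = 0.37 is "poor", 0.47 to 0.60 are "fair to good", and 0.85 is "excellent".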

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2ms2bfx
via IFTTT

Effects of Long-Term Musical Training on Cortical Auditory Evoked Potentials

Objective: Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared with nonmusicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and nonmusicians. Design: Twenty individuals (10 musicians and 10 nonmusicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three-interval, forced-choice procedure. The ACC was recorded and used as an objective (i.e., nonbehavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. Results: As a group, musicians were able to detect smaller changes in pitch than nonmusicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than nonmusicians. ACC responses recorded from musicians were larger than those recorded from nonmusicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than nonmusicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the ripple noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Conclusions: Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than nonmusicians. 
Those differences are evident not only in perceptual/behavioral tests but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a nonbehavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based auditory training program, perhaps geared toward pediatric or hearing-impaired listeners.
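The amplitude normalization described in the Results (ACC amplitude expressed relative to the onset-response amplitude within each stimulus pair) amounts to a simple ratio. A hedged sketch; the function name and example amplitudes are hypothetical, not drawn from the study's data:

```python
def normalized_acc(acc_amplitude_uv, onset_amplitude_uv):
    """Normalize the acoustic change complex (ACC) amplitude to the
    onset-response amplitude recorded for the same stimulus pair."""
    if onset_amplitude_uv == 0:
        raise ValueError("onset-response amplitude must be nonzero")
    return acc_amplitude_uv / onset_amplitude_uv

# Hypothetical example: a 3 uV ACC against a 6 uV onset response.
ratio = normalized_acc(3.0, 6.0)
```

Expressing the ACC as a fraction of the onset response controls for overall differences in cortical response magnitude between listeners, so group comparisons reflect sensitivity to the acoustic change rather than raw response size.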

from #Audiology via xlomafota13 on Inoreader http://ift.tt/2mrY8Q7
via IFTTT