Wednesday, April 18, 2018

Beyond the audiogram: application of models of auditory fitness for duty to assess communication in the real world

Volume 57, Issue 5, May 2018, Page 321-322

from #Audiology via ola Kala on Inoreader https://ift.tt/2HLzvtF
via IFTTT

Evidence-based occupational hearing screening II: validation of a screening methodology using measures of functional hearing ability

Volume 57, Issue 5, May 2018, Page 323-334

from #Audiology via ola Kala on Inoreader https://ift.tt/2EXiXMo
via IFTTT

Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

Purpose
The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces.
Method
Young adult speakers produced 3 repetitions of 2 different sentences at 3 different loudness levels. Lingual kinematic and acoustic signals were collected and analyzed. Acoustic and kinematic variants of several vowel space metrics were calculated from the formant frequencies and the position of 2 lingual markers. Traditional metrics included triangular vowel space area and the vowel articulation index. Acoustic and kinematic variants of sentence-level metrics based on the articulatory–acoustic vowel space and the vowel space hull area were also calculated.
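The abstract does not give the computational details of these metrics; as a hedged illustration of how a point-vowel metric of this kind is commonly obtained, the Python sketch below computes a triangular vowel space area from hypothetical F1/F2 values of the corner vowels /i/, /a/, and /u/ using the shoelace formula. The formant values are invented and do not come from the study.

# Hedged sketch: triangular vowel space area (tVSA) from corner-vowel formants.
# The formant values are hypothetical; the study's own extraction and
# statistical procedures are not reproduced here.

def triangular_vsa(i, a, u):
    """Shoelace area of the /i/-/a/-/u/ triangle; each vowel is (F1, F2) in Hz."""
    (f1_i, f2_i), (f1_a, f2_a), (f1_u, f2_u) = i, a, u
    return 0.5 * abs(f1_i * (f2_a - f2_u)
                     + f1_a * (f2_u - f2_i)
                     + f1_u * (f2_i - f2_a))

# Example corner vowels (hypothetical values, Hz).
i_vowel = (300, 2300)   # /i/: low F1, high F2
a_vowel = (750, 1300)   # /a/: high F1, mid F2
u_vowel = (320, 900)    # /u/: low F1, low F2

print(f"tVSA = {triangular_vsa(i_vowel, a_vowel, u_vowel):,.0f} Hz^2")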
Results
Both acoustic and kinematic variants of the sentence-level metrics significantly increased with an increase in loudness, whereas no statistically significant differences in traditional vowel-point metrics were observed for either the kinematic or acoustic variants across the 3 loudness conditions. In addition, moderate-to-strong relationships between the acoustic and kinematic variants of the sentence-level vowel space metrics were observed for the majority of participants.
Conclusions
These data suggest that both kinematic and acoustic vowel space metrics that reflect the dynamic contributions of both consonant and vowel segments are sensitive to within-speaker changes in articulation associated with manipulations of speech intensity.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EXtO98
via IFTTT

Children's Acoustic and Linguistic Adaptations to Peers With Hearing Impairment

Purpose
This study aims to examine the clear speaking strategies used by older children when interacting with a peer with hearing loss, focusing on both acoustic and linguistic adaptations in speech.
Method
The Grid task, a problem-solving task developed to elicit spontaneous interactive speech, was used to obtain a range of global acoustic and linguistic measures. Eighteen 9- to 14-year-old children with normal hearing (NH) performed the task in pairs, once with a friend with NH and once with a friend with a hearing impairment (HI).
Results
In HI-directed speech, children increased their fundamental frequency range and midfrequency intensity, decreased the number of words per phrase, and expanded their vowel space area by increasing F1 and F2 range, relative to NH-directed speech. However, participants did not appear to make changes to their articulation rate, the lexical frequency of content words, or lexical diversity when talking to their friend with HI compared with their friend with NH.
Conclusions
Older children show evidence of listener-oriented adaptations to their speech production; although their speech production systems are still developing, they are able to make speech adaptations to benefit the needs of a peer with HI, even without being given a specific instruction to do so.
Supplemental Material
https://doi.org/10.23641/asha.6118817

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2HLjdRA
via IFTTT

The Prevalence of Speech and Language Disorders in French-Speaking Preschool Children From Yaoundé (Cameroon)

Purpose
The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon.
Method
A total of 460 participants aged 3–5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist.
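As a hedged illustration of the screening criterion (the test's actual norms and subtest structure are not reproduced here), the sketch below applies a cutoff of 2 SDs below a renormed sample mean to a set of invented raw scores.

# Hedged sketch: flagging scores more than 2 SDs below a renormed sample mean.
# The scores are invented; the ELO subtests are not reproduced.
import statistics

scores = [34, 41, 28, 39, 45, 8, 37, 30, 44, 19]   # hypothetical raw subtest scores
mean = statistics.mean(scores)
sd = statistics.stdev(scores)
cutoff = mean - 2 * sd                              # "2 SDs below the normative mean"

flagged = [s for s in scores if s < cutoff]
print(f"mean={mean:.1f}, SD={sd:.1f}, cutoff={cutoff:.1f}, flagged={flagged}")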
Results
Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%.
Conclusion
Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EXRjyE
via IFTTT

A Systematic Review of Semantic Feature Analysis Therapy Studies for Aphasia

Purpose
The purpose of this study was to review treatment studies of semantic feature analysis (SFA) for persons with aphasia. The review documents how SFA is used, appraises the quality of the included studies, and evaluates the efficacy of SFA.
Method
The following electronic databases were systematically searched (last search February 2017): Academic Search Complete, CINAHL Plus, E-journals, Health Policy Reference Centre, MEDLINE, PsycARTICLES, PsycINFO, and SocINDEX. The quality of the included studies was rated. Clinical efficacy was determined by calculating effect sizes (Cohen's d) or percent of nonoverlapping data when d could not be calculated.
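For illustration, the sketch below computes the two effect-size summaries named above, Cohen's d and percent of nonoverlapping data (PND), for an invented single-case baseline/treatment series. It follows one common single-case convention (baseline SD in the denominator) and is not the review's exact calculation.

# Hedged sketch: Cohen's d and PND for an invented single-case naming data set.
import statistics

baseline = [2, 3, 1, 2, 3]       # hypothetical items named correctly per baseline probe
treatment = [4, 6, 7, 7, 9, 8]   # hypothetical treatment-phase probes

# A d variant commonly used for single-case data:
# (treatment mean - baseline mean) / baseline SD.
d = (statistics.mean(treatment) - statistics.mean(baseline)) / statistics.stdev(baseline)

# Percent of nonoverlapping data: share of treatment points above the best baseline point.
pnd = 100 * sum(x > max(baseline) for x in treatment) / len(treatment)

print(f"d = {d:.2f}, PND = {pnd:.0f}%")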
Results
Twenty-one studies were reviewed reporting on 55 persons with aphasia. SFA was used in 6 different types of studies: confrontation naming of nouns, confrontation naming of verbs, connected speech/discourse, group, multilingual, and studies where SFA was compared with other approaches. The quality of included studies was high (Single Case Experimental Design Scale average [range] = 9.55 [8.0–11]). Naming of trained items improved for 45 participants (81.82%). Effect sizes indicated that there was a small treatment effect.
Conclusions
SFA leads to positive outcomes despite the variability of treatment procedures, dosage, duration, and variations to the traditional SFA protocol. Further research is warranted to examine the efficacy of SFA and generalization effects in larger controlled studies.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2HLjcNw
via IFTTT

The Role of Language in Nonlinguistic Stimuli: Comparing Inhibition in Children With Language Impairment

Purpose
There is conflicting evidence regarding whether and how a deficit in executive function may be associated with developmental language impairment (LI). Nonlinguistic stimuli are now frequently used when testing executive function to avoid a language confound. However, it is possible that increased stimulus processing demands for nonlinguistic stimuli may also compound the complexity of the relationship between executive function and LI. The current study examined whether variability across nonlinguistic auditory stimuli might differentially affect inhibition and whether performance differs between children with and without language difficulties.
Method
Sixty children, aged 8–14 years, took part in the study: 20 typically developing children, 20 children with autism spectrum disorder, and 20 children with specific LI. For the purposes of assessing the role of language, children were further categorized based on language ability: 33 children with normal-language (NL) ability and 27 children with LI. Children completed a go/no-go task with 2 conditions comparing nonlinguistic auditory stimuli: 2 abstract sounds and 2 familiar sounds (duck quack and dog bark).
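As a hedged sketch of how inhibition is typically scored in a go/no-go paradigm (the study's scoring procedure is not detailed in the abstract), the code below counts commission errors, i.e., responses on no-go trials, for an invented trial sequence.

# Hedged sketch: scoring inhibition as commission errors in a go/no-go task.
# The trial sequence and responses are invented.
trials = ["go", "go", "nogo", "go", "nogo", "go", "go", "nogo", "go", "nogo"]
responded = [True, True, True, True, False, True, False, True, True, False]

commission_errors = sum(1 for t, r in zip(trials, responded) if t == "nogo" and r)
omission_errors = sum(1 for t, r in zip(trials, responded) if t == "go" and not r)

n_nogo = trials.count("nogo")
print(f"commission errors: {commission_errors}/{n_nogo} "
      f"({100 * commission_errors / n_nogo:.0f}%), omissions: {omission_errors}")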
Results
There was no significant difference for diagnostic category. However, there was a significant interaction between language ability and condition. There was no significant difference in the NL group performance in the abstract and familiar sound conditions. In contrast, the group with LI made significantly more errors in the abstract condition compared with the familiar condition. There was no significant difference in inhibition between the NL group and the group with LI in the familiar condition; however, the group with LI made significantly more errors than the NL group in the abstract condition.
Conclusions
Caution is needed in stimuli selection when examining executive function skills because, although stimuli may be selected on the basis of being “nonlinguistic and auditory,” the type of stimuli chosen can differentially affect performance. The findings have implications for the interpretation of deficits in executive function as well as the selection of stimuli in future studies.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EYAtjd
via IFTTT

Auditory–Perceptual Assessment of Fluency in Typical and Neurologically Disordered Speech

Purpose
The aim of this study was to investigate how speech fluency in typical and atypical speech is perceptually assessed by speech-language pathologists (SLPs). Our research questions were as follows: (a) How do SLPs rate fluency in speakers with and without neurological communication disorders? (b) Do they differentiate the speaker groups? and (c) What features do they hear as impairing speech fluency?
Method
Ten SLPs specialized in neurological communication disorders volunteered as expert judges to rate 90 narrative speech samples on a Visual Analogue Scale (see Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kraemer, & Hillman, 2009; p. 127). The samples—randomly mixed—were from 70 neurologically healthy speakers (the control group) and 20 speakers with traumatic brain injury, 10 of whom had neurogenic stuttering (designated as Clinical Groups A and B).
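The abstract does not specify how agreement among the judges was quantified; as one simple, hedged illustration, the sketch below uses the per-sample standard deviation of the judges' 0-100 ratings as an inverse index of agreement. All ratings are invented.

# Hedged sketch: summarizing agreement among judges rating the same samples
# on a 0-100 visual analogue scale. Ratings are invented, and the study's
# actual agreement statistic is not specified in the abstract.
import statistics

# rows = speech samples, columns = the ten judges' ratings (0-100)
ratings = [
    [82, 78, 90, 85, 80, 88, 76, 84, 79, 83],   # e.g., a typical speaker
    [35, 30, 41, 28, 33, 38, 25, 36, 31, 29],   # e.g., a speaker with TBI
]

for i, sample in enumerate(ratings, start=1):
    spread = statistics.stdev(sample)            # lower spread = higher agreement
    print(f"sample {i}: mean rating {statistics.mean(sample):.1f}, judge SD {spread:.1f}")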
Results
The fluency rates were higher for typical speakers than for speakers with traumatic brain injury; however, the agreement among the judges was higher for atypical fluency. Auditory–perceptual assessment of fluency was significantly impaired by the features of stuttering and something else but not by speech rate. Stuttering was also perceived in speakers not diagnosed as stutterers. A borderline between typical and atypical fluency was found.
Conclusions
Speech fluency is a multifaceted phenomenon, and on the basis of this study, we suggest a more general approach to fluency and its deviations that will take into account, in addition to the motor and linguistic aspects of fluency, the metalinguistic component of expression as well. The results of this study indicate a need for further studies on the precise nature of borderline fluency and its different disfluencies.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2HKhjAz
via IFTTT

Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs

Purpose
A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support.
Method
Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified.
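The abstract does not describe the fixation-detection algorithm; as a hedged sketch, the code below derives two of the reported measures, latency to first look and total looking time for an area of interest (AOI), directly from 60 Hz point-of-gaze samples. The gaze coordinates and AOI rectangle are invented.

# Hedged sketch: latency to first AOI sample and total AOI time from 60 Hz gaze data.
SAMPLE_RATE_HZ = 60
AOI = (400, 200, 700, 500)   # x_min, y_min, x_max, y_max (pixels) around the human figures

# Invented point-of-gaze samples (x, y) recorded at 60 Hz.
gaze = [(120, 300), (150, 310), (430, 260), (450, 270), (460, 280),
        (800, 600), (500, 300), (510, 310), (520, 320), (150, 100)]

def in_aoi(point, aoi):
    x, y = point
    return aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3]

hits = [in_aoi(p, AOI) for p in gaze]                     # True where gaze falls inside the AOI
latency_ms = 1000 * hits.index(True) / SAMPLE_RATE_HZ     # these invented data contain at least one hit
dwell_ms = 1000 * sum(hits) / SAMPLE_RATE_HZ

print(f"latency to first AOI sample: {latency_ms:.1f} ms, total AOI time: {dwell_ms:.1f} ms")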
Results
The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome.
Conclusion
The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed.
Supplemental Material
https://doi.org/10.23641/asha.6066545

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2vt9GMw
via IFTTT

Vestibular Manifestations in Subjects With Enlarged Vestibular Aqueduct.

Vestibular Manifestations in Subjects With Enlarged Vestibular Aqueduct.

Otol Neurotol. 2018 Apr 16;:

Authors: Song JJ, Hong SK, Lee SY, Park SJ, Kang SI, An YH, Jang JH, Kim JS, Koo JW

Abstract
OBJECTIVE: To describe the results of a thorough evaluation in a large series of patients with an enlarged vestibular aqueduct (EVA), focusing on vestibular manifestations with etiological considerations.
STUDY DESIGN: Retrospective chart review of patients with EVA.
SETTING: Tertiary referral center.
PATIENTS: A total of 22 EVA patients with a median age of 8 years (6 mo-35 yr) who underwent both audiovestibular and radiologic examinations.
MAIN OUTCOME MEASURES: Patient demographics, radiologic findings, audiologic results, vestibular symptoms, findings of neurotologic examinations, and laboratory evaluations were collected and analyzed. Standard descriptive statistics were used to summarize patient characteristics. Subjects with a history of vertigo attacks were categorized as the "vestibulopathy group," while subjects without any history of vertigo were categorized as the "non-vestibulopathy group."
RESULTS: Of the 41 ears included, 37 (90.2%) had hearing loss on initial audiometric evaluations. Among the 22 patients, 14 (63.6%) complained of dizziness. Of the 14 vertiginous patients, seven had recurrent episodes, five had a history of a single attack, and two presented with postural imbalance. There were no significant differences between the vestibulopathy and non-vestibulopathy groups with regard to the relationship between the development of vestibular symptoms and aqueductal size, hearing threshold, or age at first visit. Four of the 22 patients (18.2%) developed secondary benign paroxysmal positional vertigo (BPPV), and all of these patients complained of simultaneous decreases in hearing.
CONCLUSIONS: Our results demonstrate that patients may develop vestibular symptoms during their clinical course, and all patients with an enlarged vestibular aqueduct should be cautioned regarding the potential development of vestibular pathology. Moreover, the non-negligible incidence of secondary BPPV mandates positional tests when evaluating EVA patients with vertigo.

PMID: 29664869 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2HHRkdc
via IFTTT

A Novel GJB2 compound heterozygous mutation c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) causes sensorineural hearing loss in a Chinese family.

A Novel GJB2 compound heterozygous mutation c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) causes sensorineural hearing loss in a Chinese family.

J Clin Lab Anal. 2018 Apr 17;:e22444

Authors: Shi X, Zhang Y, Qiu S, Zhuang W, Yuan N, Sun T, Gao J, Qiao Y, Liu K

Abstract
OBJECTIVE: To investigate whether a novel compound heterozygous mutation, c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18), in GJB2 results in hearing loss.
METHODS: Allele-specific PCR-based universal array (ASPUA) screening and sequence analysis were applied to identify the mutations. A 3D model was built and molecular dynamics (MD) simulations were performed to assess the pathogenic potential of the mutations. Furthermore, wild-type (WT) and mutant (Mut) GJB2 DNA fragments carrying the c.257C>G and c.176del16 mutations were cloned and transfected into HEK293 cells and spiral ganglion neurons (SGNs) using a lentivirus delivery system to determine the subcellular localization of the WT and Mut CX26 proteins.
RESULTS: A novel compound heterozygous mutation, c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18), in GJB2 was identified in a Chinese family in which 4 siblings had profound hearing loss while the fifth child had normal hearing. By ASPUA screening and sequencing, the compound heterozygous mutation GJB2 c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) was identified in all four deaf children, with each mutated GJB2 allele inherited from one of the parents. No GJB2 mutation was identified in the normal-hearing child. In addition, the compound heterozygous mutation GJB2 c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) altered the subcellular localization of each corresponding mutated CX26 protein and could cause hearing loss, as predicted by MD simulation and verified in both 293T cells and SGNs.
CONCLUSION: The c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) compound mutations in GJB2 detected in this study are novel and may be associated with hearing loss in this Chinese family.

PMID: 29665173 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2H8Kf8p
via IFTTT

The Plasma Membrane Calcium ATPases in Calcium Signaling Network.

The Plasma Membrane Calcium ATPases in Calcium Signaling Network.

Curr Protein Pept Sci. 2018 Apr 16;:

Authors: Wu X, Weng L, Zhang J, Liu X, Huang J

Abstract
The plasma membrane Ca2+ ATPases (PMCAs) are responsible for the clearance of Ca2+ out of cells after intracellular Ca2+ transients. Cooperating with Na+/Ca2+ exchangers (NCXs) and Ca2+ buffering proteins, PMCAs play an essential role in maintaining long-term cellular Ca2+ homeostasis. The plasma membrane Ca2+ ATPase was first discovered in the red blood cell membrane about 50 years ago, and other PMCA isoforms and alternatively spliced variants were subsequently identified in different tissues and at different developmental stages, revealing a surprising complexity of the PMCA family. In mammals, there are four PMCA isoforms encoded by four distinct genes. Isoforms 1 and 4 are found in virtually all tissues, whereas isoforms 2 and 3 are primarily expressed in excitable cells such as neurons and myocytes. Perturbation of PMCA function has been implicated in a variety of diseases and disorders, including hearing loss, ataxia, paraplegia, and infertility. Here, we review recent progress in the study of the PMCAs and related disorders, in particular how these pathological conditions help us gain an in-depth insight into the function of PMCAs and their contribution to the regulation of the Ca2+ signaling network.

PMID: 29663880 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2JTiZZc
via IFTTT

Identification of variants in the mitochondrial lysine-tRNA (MT-TK) gene in myoclonic epilepsy-pathogenicity evaluation and structural characterization by in silico approach.

Identification of variants in the mitochondrial lysine-tRNA (MT-TK) gene in myoclonic epilepsy-pathogenicity evaluation and structural characterization by in silico approach.

J Cell Biochem. 2018 Apr 16;:

Authors: Nadeem MS, Ahmad H, Mohammed K, Muhammad K, Ullah I, Baothman OAS, Ali N, Anwar F, Zamzami MA, Shakoori AR

Abstract
Variations in mitochondrial genes have an established link with myoclonic epilepsy. In the present study, we evaluated the nucleotide sequence of the MT-TK gene in 52 individuals from 12 unrelated families and report three variations in 2 of the 13 epileptic patients. All of the investigated patients had symptoms of myoclonus; 61.5% were positive for ataxia, 23.07% suffered from hearing loss, 15.38% had mild to severe dementia, 69.23% were male, and 61.53% had cousin marriage in their family history. DNA extracted from saliva was used for PCR amplification of a 440 bp fragment encompassing the complete MT-TK gene, which was sequenced in all participants. The nucleotide sequence analysis revealed three mutations, m.8306T>C, m.8313G>C, and m.8362T>G, that diverge from available reports, and the identified mutations were heteroplasmic. The pathogenicity of the identified variants was predicted with the in silico tools PON-mt-tRNA and MitoTIP, and the secondary structure of the altered MT-TK was predicted with the RNAStructure web server. The MitoTIP and PON-mt-tRNA analyses provided strong evidence of pathogenic effects of these mutations, and the single nucleotide variations resulted in disrupted secondary structures of the mutant MT-TK models, as predicted by RNAStructure. In vivo confirmation of the structural and pathogenic effects of the identified mutations in animal models can build on these findings.

PMID: 29663531 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader https://ift.tt/2J5QEh5
via IFTTT

Effects of Hearing Loss and Fast-Acting Compression on Amplitude Modulation Perception and Speech Intelligibility

Objective: The purpose was to investigate the effects of hearing loss and fast-acting compression on speech intelligibility and two measures of temporal modulation sensitivity.
Design: Twelve adults with normal hearing (NH) and 16 adults with mild to moderately severe sensorineural hearing loss were tested. Amplitude modulation detection and modulation-depth discrimination (MDD) thresholds with sinusoidal carriers of 1 or 5 kHz and modulators in the range from 8 to 256 Hz were used as measures of temporal modulation sensitivity. Speech intelligibility was assessed by obtaining speech reception thresholds in stationary and fluctuating background noise. All thresholds were obtained with and without compression (using a fixed compression ratio of 2:1).
Results: For modulation detection, the thresholds were similar or lower for the group with hearing loss than for the group with NH. In contrast, the MDD thresholds were higher for the group with hearing loss than for the group with NH. Fast-acting compression increased the modulation detection thresholds, while no effect of compression on the MDD thresholds was observed. The speech reception thresholds obtained in stationary noise were slightly increased in the compression condition relative to the linear processing condition, whereas no difference in the speech reception thresholds obtained in fluctuating noise was observed. For the group with NH, individual differences in the MDD thresholds could account for 72% of the variability in the speech reception thresholds obtained in stationary noise, whereas the correlation was not significant for the hearing-loss group.
Conclusions: Fast-acting compression can restore modulation detection thresholds for listeners with hearing loss to the values observed for listeners with NH. Despite this normalization of the modulation detection thresholds, compression does not seem to provide a benefit for speech intelligibility. Furthermore, fast-acting compression may not be able to restore MDD thresholds to the values observed for listeners with NH, suggesting that the two measures of amplitude modulation sensitivity represent different aspects of temporal processing. For listeners with NH, the ability to discriminate modulation depth was highly correlated with speech intelligibility in stationary noise.
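For illustration only, the sketch below applies a simple fast-acting 2:1 compressor to a sinusoidally amplitude-modulated 1 kHz tone, the kind of processing and stimulus described above. The threshold and attack/release times are invented and do not reproduce the hearing-aid simulation used in the study.

# Hedged sketch: a fast-acting 2:1 compressor applied to a sinusoidally
# amplitude-modulated 1 kHz tone, illustrating how compression reduces
# modulation depth. All parameters are illustrative.
import math

fs = 16000                      # sampling rate (Hz)
fc, fm, m = 1000.0, 8.0, 0.5    # carrier frequency, modulation rate, modulation depth
x = [(1 + m * math.sin(2 * math.pi * fm * n / fs))
     * math.sin(2 * math.pi * fc * n / fs) for n in range(fs)]

ratio, threshold_db = 2.0, -20.0            # 2:1 compression above -20 dB re full scale
att = math.exp(-1 / (0.005 * fs))           # ~5 ms attack
rel = math.exp(-1 / (0.050 * fs))           # ~50 ms release

env, y = 0.0, []
for s in x:
    a = abs(s)
    coeff = att if a > env else rel
    env = coeff * env + (1 - coeff) * a      # fast envelope follower
    level_db = 20 * math.log10(max(env, 1e-9))
    over = max(0.0, level_db - threshold_db)
    gain_db = -over * (1 - 1 / ratio)        # 2:1 static compression rule
    y.append(s * 10 ** (gain_db / 20))

print(f"peak in: {max(map(abs, x)):.2f}, peak out: {max(map(abs, y)):.2f}")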

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ha4ApK
via IFTTT

Effects of Hearing Loss and Fast-Acting Compression on Amplitude Modulation Perception and Speech Intelligibility

Objective: The purpose was to investigate the effects of hearing-loss and fast-acting compression on speech intelligibility and two measures of temporal modulation sensitivity. Design: Twelve adults with normal hearing (NH) and 16 adults with mild to moderately severe sensorineural hearing loss were tested. Amplitude modulation detection and modulation-depth discrimination (MDD) thresholds with sinusoidal carriers of 1 or 5 kHz and modulators in the range from 8 to 256 Hz were used as measures of temporal modulation sensitivity. Speech intelligibility was assessed by obtaining speech reception thresholds in stationary and fluctuating background noise. All thresholds were obtained with and without compression (using a fixed compression ratio of 2:1). Results: For modulation detection, the thresholds were similar or lower for the group with hearing loss than for the group with NH. In contrast, the MDD thresholds were higher for the group with hearing loss than for the group with NH. Fast-acting compression increased the modulation detection thresholds, while no effect of compression on the MDD thresholds was observed. The speech reception thresholds obtained in stationary noise were slightly increased in the compression condition relative to the linear processing condition, whereas no difference in the speech reception thresholds obtained in fluctuating noise was observed. For the group with NH, individual differences in the MDD thresholds could account for 72% of the variability in the speech reception thresholds obtained in stationary noise, whereas the correlation was insignificant for the hearing-loss group. Conclusions: Fast-acting compression can restore modulation detection thresholds for listeners with hearing loss to the values observed for listeners with NH. Despite this normalization of the modulation detection thresholds, compression does not seem to provide a benefit for speech intelligibility. Furthermore, fast-acting compression may not be able to restore MDD thresholds to the values observed for listeners with NH, suggesting that the two measures of amplitude modulation sensitivity represent different aspects of temporal processing. For listeners with NH, the ability to discriminate modulation depth was highly correlated with speech intelligibility in stationary noise. ACKNOWLEDGMENTS: We thank Nicoline Thorup and Pernille Holtegaard for their assistance in recruiting the listeners with hearing loss. We thank the Audiological Department at Bispebjerg Hospital for providing support through their facilities and staff. Many thanks to Brian Moore and two anonymous reviewers for their very helpful feedback on earlier versions of this paper. This project was carried at the Centre for Applied Hearing Research (CAHR) supported by Widex, Oticon, GN ReSound and the Technical University of Denmark. The authors have no conflicts of interest to disclose. Address for correspondence: Alan Wiinberg, Department of Electrical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark. E-mail: alwiin@elektro.dtu.dk Received January 18, 2017; accepted February 24, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2Ha4ApK
via IFTTT

Effects of Hearing Loss and Fast-Acting Compression on Amplitude Modulation Perception and Speech Intelligibility

Objective: The purpose was to investigate the effects of hearing-loss and fast-acting compression on speech intelligibility and two measures of temporal modulation sensitivity. Design: Twelve adults with normal hearing (NH) and 16 adults with mild to moderately severe sensorineural hearing loss were tested. Amplitude modulation detection and modulation-depth discrimination (MDD) thresholds with sinusoidal carriers of 1 or 5 kHz and modulators in the range from 8 to 256 Hz were used as measures of temporal modulation sensitivity. Speech intelligibility was assessed by obtaining speech reception thresholds in stationary and fluctuating background noise. All thresholds were obtained with and without compression (using a fixed compression ratio of 2:1). Results: For modulation detection, the thresholds were similar or lower for the group with hearing loss than for the group with NH. In contrast, the MDD thresholds were higher for the group with hearing loss than for the group with NH. Fast-acting compression increased the modulation detection thresholds, while no effect of compression on the MDD thresholds was observed. The speech reception thresholds obtained in stationary noise were slightly increased in the compression condition relative to the linear processing condition, whereas no difference in the speech reception thresholds obtained in fluctuating noise was observed. For the group with NH, individual differences in the MDD thresholds could account for 72% of the variability in the speech reception thresholds obtained in stationary noise, whereas the correlation was insignificant for the hearing-loss group. Conclusions: Fast-acting compression can restore modulation detection thresholds for listeners with hearing loss to the values observed for listeners with NH. Despite this normalization of the modulation detection thresholds, compression does not seem to provide a benefit for speech intelligibility. Furthermore, fast-acting compression may not be able to restore MDD thresholds to the values observed for listeners with NH, suggesting that the two measures of amplitude modulation sensitivity represent different aspects of temporal processing. For listeners with NH, the ability to discriminate modulation depth was highly correlated with speech intelligibility in stationary noise. ACKNOWLEDGMENTS: We thank Nicoline Thorup and Pernille Holtegaard for their assistance in recruiting the listeners with hearing loss. We thank the Audiological Department at Bispebjerg Hospital for providing support through their facilities and staff. Many thanks to Brian Moore and two anonymous reviewers for their very helpful feedback on earlier versions of this paper. This project was carried at the Centre for Applied Hearing Research (CAHR) supported by Widex, Oticon, GN ReSound and the Technical University of Denmark. The authors have no conflicts of interest to disclose. Address for correspondence: Alan Wiinberg, Department of Electrical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark. E-mail: alwiin@elektro.dtu.dk Received January 18, 2017; accepted February 24, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2Ha4ApK
via IFTTT

A Novel GJB2 compound heterozygous mutation c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) causes sensorineural hearing loss in a Chinese family.

J Clin Lab Anal. 2018 Apr 17;:e22444

Authors: Shi X, Zhang Y, Qiu S, Zhuang W, Yuan N, Sun T, Gao J, Qiao Y, Liu K

Abstract
OBJECTIVE: To investigate whether a novel compound heterozygous mutation, c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18), in GJB2 results in hearing loss.
METHODS: Allele-specific PCR-based universal array (ASPUA) screening and sequence analysis were applied to identify the mutations. A 3D model was built for molecular dynamics (MD) simulation to evaluate the effect of the mutations. Furthermore, WT- and Mut-GJB2 DNA fragments, containing the c.257C>G and c.176del16 mutations respectively, were cloned and transfected into HEK293 cells and spiral ganglion neurons (SGNs) using a lentivirus delivery system to determine the subcellular localization of the WT and Mut CX26 proteins.
RESULTS: A novel compound heterozygous mutation c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) in GJB2 was identified in a Chinese family in which four siblings had profound hearing loss while the fifth child had normal hearing. By ASPUA screening and sequencing, the compound heterozygous mutation GJB2 c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) was identified in the four deaf children, with each mutated GJB2 allele inherited from one of the parents. No GJB2 mutation was identified in the unaffected child. Moreover, the compound heterozygous mutation GJB2 c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) altered the subcellular localization of each corresponding mutated CX26 protein and could cause hearing loss, as predicted by MD simulation and verified in both 293T cells and SGNs.
CONCLUSION: The c.257C>G (p.T86R)/c.176del16 (p.G59A fs*18) compound heterozygous mutation in GJB2 detected in this study is novel and may be associated with hearing loss in this Chinese family.

PMID: 29665173 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2H8Kf8p
via IFTTT

Characterization of Hair Cell-Like Cells Converted From Supporting Cells After Notch Inhibition in Cultures of the Organ of Corti From Neonatal Gerbils.

Front Cell Neurosci. 2018;12:73

Authors: Li Y, Jia S, Liu H, Tateya T, Guo W, Yang S, Beisel KW, He DZZ

Abstract
The senses of hearing and balance depend upon hair cells, the sensory receptors of the inner ear. Hair cells transduce mechanical stimuli into electrical activity. Loss of hair cells as a result of aging or exposure to noise and ototoxic drugs is the major cause of noncongenital hearing and balance deficits. In the ear of non-mammals, lost hair cells can spontaneously be replaced by production of new hair cells from conversion of supporting cells. Although supporting cells in adult mammals have lost that capability, neonatal supporting cells are able to convert to hair cells after inhibition of Notch signaling. Using electrophysiology and electron microscopy, we asked whether Notch inhibition is sufficient to convert supporting cells to functional hair cells. We showed that pharmacological inhibition of the canonical Notch pathway in the cultured organ of Corti prepared from neonatal gerbils induced stereocilia formation in supporting cells (defined as hair cell-like cells or HCLCs) and supernumerary stereocilia in hair cells. The newly emerged stereocilia bundles of HCLCs were functional, i.e., able to respond to mechanical stimulation with mechanotransduction (MET) current. Transmission electron microscopy (TEM) showed that HCLCs converted from pillar cells maintained the pillar cell shape and that subsurface cisternae, normally observed underneath the cytoskeleton in outer hair cells (OHCs), were not present in Deiters' cell-derived HCLCs. Voltage-clamp recordings showed that whole-cell currents from Deiters' cell-derived HCLCs retained the same kinetics and magnitude seen in normal Deiters' cells and that nonlinear capacitance (NLC), an electrical hallmark of OHC electromotility, was not detected in any HCLCs measured. Taken together, these results suggest that while Notch inhibition is sufficient for promoting stereocilia bundle formation, it is insufficient to convert neonatal supporting cells to mature hair cells. The fact that Notch inhibition led to stereocilia formation in supporting cells and supernumerary stereocilia in existing hair cells appears to suggest that Notch signaling may regulate stereocilia formation and stability during development.
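For readers unfamiliar with the measure, nonlinear capacitance in OHCs is commonly quantified by fitting capacitance–voltage data with the derivative of a two-state Boltzmann function. The Python sketch below fits that standard form to simulated data; the parameter values, noise level, and variable names are illustrative assumptions and are not data from this study (where, as noted, no NLC was detected in HCLCs).

import numpy as np
from scipy.optimize import curve_fit

def nlc_boltzmann(v_mV, q_max_fC, z, v_half_mV, c_lin_pF):
    # Derivative of a two-state Boltzmann function, the form commonly used
    # to fit OHC nonlinear capacitance. With charge in fC and voltage in mV,
    # the voltage-dependent term comes out directly in pF.
    kT_over_e_mV = 25.7  # thermal voltage at room temperature, mV
    x = np.exp(-z * (v_mV - v_half_mV) / kT_over_e_mV)
    return c_lin_pF + q_max_fC * (z / kT_over_e_mV) * x / (1.0 + x) ** 2

# Simulated capacitance-voltage data with illustrative parameter values.
rng = np.random.default_rng(1)
v = np.linspace(-150.0, 100.0, 60)  # membrane potential, mV
true_cm = nlc_boltzmann(v, q_max_fC=800.0, z=0.8, v_half_mV=-40.0, c_lin_pF=7.0)
measured = true_cm + rng.normal(0.0, 0.3, v.size)  # add measurement noise

popt, _ = curve_fit(nlc_boltzmann, v, measured, p0=[500.0, 1.0, -30.0, 7.0])
print("Qmax (fC), z, Vhalf (mV), Clin (pF):", popt)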

PMID: 29662441 [PubMed]



from #Audiology via ola Kala on Inoreader https://ift.tt/2qItXb8
via IFTTT

Hearing, self-motion perception, mobility, and aging.

Hear Res. 2018 Mar 31;:

Authors: Campos J, Ramkhalawansingh R, Pichora-Fuller MK

Abstract
Hearing helps us know where we are relative to important events and objects in our environment and it allows us to track our changing position dynamically over space and time. Auditory cues are used in combination with other sensory inputs (visual, vestibular, proprioceptive) to help us perceive our own movements through space, known as self-motion perception. Whether we are maintaining standing balance, walking, or driving, audition can provide unique and important information to help optimize self-motion perception, and consequently to support safe mobility. Recent epidemiological and experimental studies have provided evidence that hearing loss is associated with greater walking difficulties, poorer overall physical functioning, and a significantly increased risk of falling in older adults. Importantly, the mechanisms underlying the associations between hearing status and mobility are poorly understood. It is also critical to consider that age-related hearing loss is often concomitant with declines in other sensory, motor, and cognitive functions and that these declines may interact, particularly during realistic, everyday tasks. Overall, exploring the role of auditory cues and the effects of hearing loss on self-motion perception specifically, and on mobility more generally, is important both for building fundamental knowledge about the perceptual processes underlying the ability to perceive our movements through space and for optimizing mobility-related interventions for those with hearing loss so that they can function better when confronted by everyday, real-world, sensory-motor challenges. The goal of this paper is to explore the role of hearing in self-motion perception across a range of mobility-related behaviors. First, we briefly review the ways in which auditory cues are used to perceive self-motion and how sound inputs affect behaviors such as standing balance, walking, and driving. Next, we consider age-related changes in auditory self-motion perception and the potential consequences for performance on mobility-related tasks. We then describe how hearing loss is associated with declines in mobility-related abilities and increased adverse outcomes such as falls. We describe age-related changes to other sensory and cognitive functions and how these may interact with hearing loss in ways that affect mobility. Finally, we briefly consider the implications of the hearing-mobility associations with respect to applied domains such as screening for mobility problems and falls risk in those with hearing loss and developing interventions and training approaches targeting safe and independent mobility throughout the lifespan.

PMID: 29661612 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2H7Kgtj
via IFTTT

Short and Long-Term Outcomes of Titanium Clip Ossiculoplasty

Objective: To report short-term (∼4 mo) and long-term (>12 mo) audiometric outcomes following ossiculoplasty using a titanium clip partial ossicular reconstruction prosthesis.
Methods: Case series at a single tertiary referral center reviewing 130 pediatric and adult patients with conductive hearing loss (CHL) secondary to chronic otitis media (n = 121, 93%) or traumatic ossicular disruption (n = 9, 7%) who underwent partial ossiculoplasty with the CliP prosthesis from January 2005 to December 2015.
Results: At both short- and long-term follow-up, the postoperative air-bone gap (ABG) was significantly improved (18 dB HL, interquartile range 13–26, p 
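As background for the outcome measure above, the air-bone gap (ABG) is the difference between air conduction and bone conduction thresholds, usually averaged over a small set of audiometric frequencies, and ossiculoplasty series typically summarize the postoperative ABG as a median with an interquartile range. The short Python sketch below computes an ABG and such a summary; the frequency set, function names, and example thresholds are hypothetical illustrations, not data from this case series.

import numpy as np

FREQS_HZ = (500, 1000, 2000, 4000)  # assumed four-frequency average; conventions vary

def air_bone_gap(air_thresholds_db, bone_thresholds_db):
    # Mean air-bone gap (dB) across the assumed frequency set.
    air = np.asarray(air_thresholds_db, dtype=float)
    bone = np.asarray(bone_thresholds_db, dtype=float)
    return float(np.mean(air - bone))

def summarize_abg(postop_gaps_db):
    # Median postoperative ABG with its interquartile range.
    post = np.asarray(postop_gaps_db, dtype=float)
    q1, median, q3 = np.percentile(post, [25, 50, 75])
    return {"median_db": float(median), "iqr_db": (float(q1), float(q3))}

# Hypothetical example: one ear before and after surgery, then a small cohort.
preop_abg = air_bone_gap([45, 50, 40, 45], [10, 10, 15, 10])
postop_abg = air_bone_gap([25, 30, 25, 30], [10, 10, 15, 10])
print(preop_abg, postop_abg, summarize_abg([18, 22, 13, 26, 15]))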

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2vnOesf
via IFTTT

Active Transcutaneous Bone Conduction Implant: Middle Fossa Placement Technique in Children With Bilateral Microtia and External Auditory Canal Atresia

Aim: The aim of this study is to present the middle fossa technique (MFT) as an alternative for patients who cannot undergo traditional surgery for active transcutaneous bone conduction implants (ATBCI) because of their altered anatomy or desire for future aesthetic reconstruction.
Design: A descriptive case series was designed and the MFT was developed. Preoperative and postoperative information from 24 patients with external auditory canal atresia (EACA) implanted with an ATBCI was reviewed.
Results: A total of 24 children with bilateral EACA received implants in the middle cranial fossa. Their average age was 12 years. Of these patients, eight had an associated congenital disorder: Goldenhar syndrome, Treacher Collins syndrome, or the Pierre Robin sequence. The average follow-up was 17 months (range, 2–36 mo) and there were no major complications. Four patients showed skin erythema at the processor site after turn-on, which was resolved by lowering the magnet strength. One patient had a scalp hematoma that required puncture drainage. Hearing thresholds decreased on average from 66.5 to 25.2 dB 1 month after turn-on. Speech recognition improved from 29.4% unaided and 78.9% with a bone conduction hearing aid to 96.4% after implantation.
Conclusion: MFT placement of the ATBCI proved safe and effective and is a viable option for treating pediatric patients with EACA who cannot receive implants at the sinodural angle or in the retrosigmoid position because of their altered anatomy and/or desire for future aesthetic reconstruction.
Address correspondence and reprint requests to Carolina Der, M.D., Ph.D., Otorhinolaryngology Department, Hospital Luis Calvo Mackenna, Antonio Varas #360 Avenue, Providencia, Santiago 7500539, Chile; E-mail: cdercder@gmail.com The source of funding for the implant program is the National School and Scholarship Assistance Council (JUNAEB for its acronym in Spanish). The authors disclose no conflicts of interest. Copyright © 2018 by Otology & Neurotology, Inc. Image copyright © 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2JTeZbb
via IFTTT

Vestibular Manifestations in Subjects With Enlarged Vestibular Aqueduct

Objective: To describe the results of a thorough evaluation in a large series of patients with an enlarged vestibular aqueduct (EVA), focusing on vestibular manifestations with etiological considerations.
Study Design: Retrospective chart review of patients with EVA.
Setting: Tertiary referral center.
Patients: A total of 22 EVA patients with a median age of 8 years (6 mo–35 yr) who underwent both audiovestibular and radiologic examinations.
Main Outcome Measures: Patient demographics, radiologic findings, audiologic results, vestibular symptoms, findings of neurotologic examinations, and laboratory evaluations were collected and analyzed. Standard descriptive statistics were used to summarize patient characteristics. Subjects who had a history of vertigo attacks were categorized as the “vestibulopathy group,” while subjects without any history of vertigo were categorized as the “non-vestibulopathy group.”
Results: Of the 41 ears included, 37 (90.2%) had hearing loss on initial audiometric evaluations. Among the 22 patients, 14 (63.6%) complained of dizziness. Of the 14 vertiginous patients, seven had recurrent episodes, five had a history of a single attack, and two presented with postural imbalance. There were no significant differences between the vestibulopathy and non-vestibulopathy groups in aqueductal size, hearing threshold, or age at first visit; that is, the development of vestibular symptoms was not related to these factors. Four of the 22 (18.2%) patients developed secondary benign paroxysmal positional vertigo (BPPV) and all patients complained of simultaneous decreases in hearing.
Conclusions: Our results demonstrate that patients may develop vestibular symptoms during their clinical course, and all patients with an enlarged vestibular aqueduct should be cautioned regarding the potential development of vestibular pathology. Moreover, the non-negligible incidence of secondary BPPV mandates positional tests when evaluating EVA patients with vertigo.
Address correspondence and reprint requests to Ja-Won Koo, M.D., Ph.D., Department of Otorhinolaryngology, Seoul National University Bundang Hospital, 82, Gumi-ro, 173 Beon-gil, Bundang-Gu, Gyeonggi-Do, 13620, Republic of Korea; E-mail: jwkoo99@snu.ac.kr This work was supported by the SNUBH research fund (No. 02–2014–037) and National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No.2016R1C1B2007911). The authors disclose no conflicts of interest. Copyright © 2018 by Otology & Neurotology, Inc. Image copyright © 2010 Wolters Kluwer Health/Anatomical Chart Company

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2EVwaoT
via IFTTT

Long-term Hearing Outcome of Canaloplasty With Partial Ossicular Replacement in Congenital Aural Atresia

Objective: The aim of this study was to correlate postoperative hearing outcomes with the length of the partial ossicular replacement prosthesis (PORP) in patients with congenital aural atresia.
Study Design: Retrospective review of medical records.
Setting, Patients, Intervention, Main Outcome Measure: The medical records of 131 patients (132 ears) who underwent canaloplasty with PORP by a single surgeon from 2011 to 2016 were reviewed for demographic data, Jahrsdoerfer score, grade of microtia, length of prosthesis, and audiometric outcomes. Air conduction thresholds, bone conduction thresholds, and the air-bone gap were measured preoperatively and at 3, 6, 12, and 24 months of follow-up. Patients were divided into two groups according to the postoperative hearing outcomes, and the length of the PORP was compared between the two groups. Univariable and multivariable generalized estimating equations were used to investigate other favorable prognostic factors for long-term postoperative hearing results.
Results: When improvement of the air-bone gap to within 30 dB was defined as a successful hearing outcome, no significant differences in prosthesis length were observed between the two groups at 3, 6, and 12 months postoperatively. However, at the 2-year follow-up, the mean prosthesis length was significantly shorter (p = 0.006) for the success group (2.30 ± 0.53 mm) than for the nonsuccess group (2.77 ± 0.73 mm). Generalized estimating equations revealed PORP length as the only factor significantly associated with favorable long-term hearing results.
Conclusion: The long-term hearing outcome of canaloplasty with PORP is likely to be affected by prosthesis length. For that reason, making the neo-annulus as medial as possible, to shorten the length of the appropriate prosthesis, is important for successful long-term hearing outcomes.
Address correspondence and reprint requests to Yang-Sun Cho, M.D., Ph.D., Department of Otorhinolaryngology—Head and Neck Surgery, Sungkyunkwan University School of Medicine, Samsung Medical Center, 81 Irwon-ro, Gangnam-gu, 06351 Seoul, Korea; E-mail: yscho@skku.edu The authors disclose no conflicts of interest. Copyright © 2018 by Otology & Neurotology, Inc. Image copyright © 2010 Wolters Kluwer Health/Anatomical Chart Company
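The analysis above uses generalized estimating equations (GEEs) to relate a repeated binary outcome (air-bone gap improved to within 30 dB at each follow-up visit) to prosthesis length and other covariates while accounting for within-patient correlation. The Python/statsmodels sketch below shows how such a model might be specified on simulated data; the column names, simulated values, and chosen covariates are hypothetical and are not the study's dataset or variable definitions.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per ear per follow-up visit.
rng = np.random.default_rng(0)
n_patients = 30
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), 4),
    "visit_month": np.tile([3, 6, 12, 24], n_patients),
    "porp_length_mm": np.repeat(rng.normal(2.5, 0.5, n_patients), 4),
    "jahrsdoerfer": np.repeat(rng.integers(6, 11, n_patients), 4),
})
# Simulated binary outcome: shorter prostheses more likely to reach ABG <= 30 dB.
logit = 4.0 - 1.5 * df["porp_length_mm"] + 0.1 * df["jahrsdoerfer"]
df["success"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit.to_numpy())))

# Binomial GEE with exchangeable within-patient correlation.
model = smf.gee("success ~ porp_length_mm + jahrsdoerfer + visit_month",
                groups="patient_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Binomial())
print(model.fit().summary())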

from #Audiology via xlomafota13 on Inoreader https://ift.tt/2JTeHB7
via IFTTT