Tuesday, 4 December 2018

An Eye-Tracking Study of Receptive Verb Knowledge in Toddlers

Purpose
We examined receptive verb knowledge in 22- to 24-month-old toddlers with a dynamic video eye-tracking test. The primary goal of the study was to examine whether eye-gaze measures that are commonly used to study noun knowledge are also useful for studying verb knowledge.
Method
Forty typically developing toddlers participated. They viewed 2 videos side by side (e.g., girl clapping, same girl stretching) and were asked to find one of them (e.g., “Where is she clapping?”). Their eye-gaze, recorded by a Tobii T60XL eye-tracking system, was analyzed as a measure of their knowledge of the verb meanings. Noun trials were included as controls. We examined correlations between eye-gaze measures and scores on the MacArthur–Bates Communicative Development Inventories (CDI; Fenson et al., 1994), a standard parent report measure of expressive vocabulary, to see how well the various eye-gaze measures predicted CDI score.
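As a rough illustration of this kind of analysis (not the authors' code), the sketch below computes each child's increase in target looking from baseline to test and correlates it with CDI score; the file names and column names (child_id, phase, on_target, cdi) are hypothetical.

    # Illustrative sketch: correlate an eye-gaze measure with CDI scores.
    import pandas as pd
    from scipy.stats import pearsonr

    def target_looking_increase(trials: pd.DataFrame) -> float:
        """Proportion of target looking in test minus baseline for one child."""
        prop = trials.groupby("phase")["on_target"].mean()
        return prop.get("test", 0.0) - prop.get("baseline", 0.0)

    gaze = pd.read_csv("gaze_samples.csv")   # hypothetical per-sample gaze data
    cdi = pd.read_csv("cdi_scores.csv")      # hypothetical: one row per child (child_id, cdi)

    increase = gaze.groupby("child_id").apply(target_looking_increase)
    merged = cdi.set_index("child_id").join(increase.rename("gaze_increase"))
    r, p = pearsonr(merged["gaze_increase"], merged["cdi"])
    print(f"r = {r:.2f}, p = {p:.3f}")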
Results
A common measure of knowledge—a 15% increase in looking time to the target video from a baseline phase to the test phase—did correlate with CDI score but was operationalized differently for verbs than for nouns. A 2nd common measure, latency of 1st look to the target, correlated with CDI score for nouns, as in previous work, but not for verbs. A 3rd measure, fixation density, correlated with CDI score for both nouns and verbs, although the correlations went in opposite directions.
Conclusions
The dynamic nature of videos depicting verb knowledge results in eye-gaze patterns that differ from those elicited by static images depicting nouns. An eye-tracking assessment of verb knowledge is worth developing; however, the dependent measures used may need to differ from those used for static images and nouns.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rkGZfG
via IFTTT

Modifying and Validating a Measure of Chronic Stress for People With Aphasia

Purpose
Chronic stress is likely a common experience among people with the language impairment of aphasia. Importantly, chronic stress reportedly alters the neural networks central to learning and memory—essential ingredients of aphasia rehabilitation. Before we can explore the influence of chronic stress on rehabilitation outcomes, we must be able to measure chronic stress in this population. The purpose of this study was to (a) modify a widely used measure of chronic stress (Perceived Stress Scale [PSS]; Cohen & Janicki-Deverts, 2012) to fit the communication needs of people with aphasia (PWA) and (b) validate the modified PSS (mPSS) with PWA.
Method
Following systematic modification of the PSS (with permission), 72 PWA completed the validation portion of the study. Each participant completed the mPSS and measures of depression, anxiety, and resilience, and provided a sample of the stress hormone cortisol extracted from hair. Pearson's product–moment correlations were used to examine associations between mPSS scores and these measures. Approximately 30% of participants completed the mPSS 1 week later to establish test–retest reliability, analyzed using an intraclass correlation coefficient.
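A minimal sketch of the reported statistics, assuming hypothetical column names (mpss, depression, anxiety, resilience, hair_cortisol, participant, session); Pearson correlations come from SciPy and the intraclass correlation from the pingouin package. This is not the authors' code.

    import pandas as pd
    import pingouin as pg
    from scipy.stats import pearsonr

    df = pd.read_csv("mpss_validation.csv")   # hypothetical data file
    for col in ["depression", "anxiety", "resilience", "hair_cortisol"]:
        r, p = pearsonr(df["mpss"], df[col])
        print(f"mPSS vs {col}: r = {r:.2f}, p = {p:.3f}")

    # Test-retest reliability: long-format data, one row per participant x session.
    retest = pd.read_csv("mpss_retest_long.csv")
    icc = pg.intraclass_corr(data=retest, targets="participant",
                             raters="session", ratings="mpss")
    print(icc[["Type", "ICC", "CI95%"]])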
Results
Significant positive correlations were evident between the reports of chronic stress and depression and anxiety. In addition, a significant inverse correlation was found between reports of chronic stress and resilience. The mPSS also showed evidence of test–retest reliability. No association was found between mPSS score and cortisol level.
Conclusion
Although questions remain about the biological correlates of chronic stress in people with poststroke aphasia, significant associations between chronic stress and several psychosocial variables provide evidence of validity of this emerging measure of chronic stress.

from #Audiology via ola Kala on Inoreader https://ift.tt/2G005SG
via IFTTT

Frequencies in Perception and Production Differentially Affect Child Speech

Purpose
Frequent sounds and frequent words are both acquired at an earlier age and are produced by children more accurately. Recent research suggests, however, that frequency is not always facilitative. Interactions between input frequency in perception and practice frequency in production may limit or inhibit growth. In this study, we consider how a range of input frequencies affects production accuracy and referent identification.
Method
Thirty-three typically developing 3- and 4-year-olds participated in a novel word-learning task. In the initial test block, participants heard nonwords 1, 3, 6, or 10 times—produced either by a single talker or by multiple talkers—and then produced them immediately. In a posttest, participants heard all nonwords just once and then produced them. Referent identification was probed in between the test and posttest.
Results
Production accuracy was most clearly facilitated by an input frequency of 3 during the test block. Input frequency interacted with production practice, and the facilitative effect of input frequency did not carry over to the posttest. Talker variability did not affect accuracy, regardless of input frequency. The referent identification results did not favor talker variability or a particular input frequency value, but participants were able to learn the words at better than chance levels.
Conclusions
The results confirm that the input can be facilitative, but input frequency and production practice interact in ways that limit input-based learning, and more input is not always better. Future research on this interaction may allow clinicians to optimize various types of frequency commonly used during therapy.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rn6Sf0
via IFTTT

Individualized Patient Vocal Priorities for Tailored Therapy

Purpose
The purposes of this study are to introduce the concept of vocal priorities based on acoustic correlates, to develop an instrument to determine these vocal priorities, and to analyze the pattern of vocal priorities in patients with voice disorders.
Method
Questions probing the importance of 5 vocal attributes (vocal clarity, loudness, mean speaking pitch, pitch range, vocal endurance) were generated from a consensus conference involving speech-language pathologists, laryngologists, and voice scientists, as well as from patient feedback. The responses to the preliminary items from 213 subjects were subjected to exploratory factor analysis, which confirmed 4 of the predefined domains. The final instrument consisted of a 16-item Vocal Priority Questionnaire probing the relative importance of clarity, loudness, mean speaking pitch, and pitch range.
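For readers unfamiliar with the internal-consistency statistic reported in the Results below, here is a small illustration of how Cronbach's alpha can be computed from an item-response matrix; the data here are random placeholders, not the study's responses.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of questionnaire responses."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(213, 16)).astype(float)  # placeholder data
    print(f"alpha = {cronbach_alpha(responses):.3f}")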
Results
The Vocal Priority Questionnaire had high reliability (Cronbach's α = .824) and good construct validity. A majority of the cohort (61%) ranked vocal clarity as their highest vocal priority, and 20%, 12%, and 7% ranked loudness, mean speaking pitch, and pitch range, respectively, as their highest priority. The frequencies of the highest ranked priorities did not differ by voice diagnosis or by sex. Considerable individual variation in vocal priorities existed within these large trends.
Conclusions
A patient's vocal priorities can be identified and taken into consideration in planning behavioral or surgical intervention for a voice disorder. Inclusion of vocal priorities in treatment planning empowers the patient in shared decision making, helps the clinician tailor treatment, and may also improve therapy compliance.

from #Audiology via ola Kala on Inoreader https://ift.tt/2FX4QfG
via IFTTT

The Effects of Static and Moving Spectral Ripple Sensitivity on Unaided and Aided Speech Perception in Noise

Purpose
This study evaluated whether certain spectral ripple conditions were more informative than others in predicting ecologically relevant unaided and aided speech outcomes.
Method
A quasi-experimental study design was used to evaluate 67 older adult hearing aid users with bilateral, symmetrical hearing loss. Speech perception in noise was tested in unaided and aided conditions, with auditory-only and auditory–visual presentation, in 2 types of noise. Predictors included age, audiometric thresholds, audibility, hearing aid compression, and modulation depth detection thresholds for moving (4-Hz) or static (0-Hz) 2-cycle/octave spectral ripples applied to carriers of broadband noise or 2000-Hz low- or high-pass filtered noise.
Results
A principal component analysis of the modulation detection data found that broadband and low-pass static and moving ripple detection thresholds loaded onto the first factor whereas high-pass static and moving ripple detection thresholds loaded onto a second factor. A linear mixed model revealed that audibility and the first factor (reflecting broadband and low-pass static and moving ripples) were significantly associated with speech perception performance. Similar results were found for unaided and aided speech scores. The interactions between speech conditions were not significant, suggesting that the relationship between ripples and speech perception was consistent regardless of visual cues or noise condition. High-pass ripple sensitivity was not correlated with speech understanding.
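The two analysis steps described above could be sketched roughly as follows; this is not the authors' pipeline, and the file, column, and factor names are assumptions.

    import pandas as pd
    from sklearn.decomposition import PCA
    import statsmodels.formula.api as smf

    # PCA of ripple modulation-detection thresholds (hypothetical wide file).
    ripple = pd.read_csv("ripple_thresholds.csv")
    ripple_cols = ["bb_static", "bb_moving", "lp_static", "lp_moving",
                   "hp_static", "hp_moving"]
    pca = PCA(n_components=2)
    scores = pca.fit_transform(ripple[ripple_cols])
    ripple["factor1"] = scores[:, 0]
    ripple["factor2"] = scores[:, 1]

    # Linear mixed model relating speech scores to audibility and the PCA factors.
    speech = pd.read_csv("speech_in_noise.csv").merge(ripple, on="subject")
    model = smf.mixedlm("speech_score ~ audibility + factor1 + factor2",
                        data=speech, groups=speech["subject"]).fit()
    print(model.summary())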
Conclusions
The results suggest that, for hearing aid users, poor speech understanding in noise and sensitivity to both static and slow-moving ripples may reflect deficits in the same underlying auditory processing mechanism. Significant factor loadings involving ripple stimuli with low-frequency content may suggest an impaired ability to use temporal fine structure information in the stimulus waveform. Support is provided for the use of spectral ripple testing to predict speech perception outcomes in clinical settings.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rgXycC
via IFTTT

The Relationship Between Non-Orthographic Language Abilities and Reading Performance in Chronic Aphasia: An Exploration of the Primary Systems Hypothesis

Purpose
This study investigated the relationship between non-orthographic language abilities and reading in order to examine assumptions of the primary systems hypothesis and further our understanding of language processing poststroke.
Method
Performance on non-orthographic semantic, phonologic, and syntactic tasks, as well as oral reading and reading comprehension tasks, was assessed in 43 individuals with aphasia. Correlation and regression analyses were conducted to determine the relationship between these measures. In addition, analyses of variance examined differences within and between reading groups (within normal limits, phonological, deep, or global alexia).
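A hedged sketch of correlation and regression analyses of this kind, using hypothetical per-participant composite scores (semantics, phonology, syntax, oral_reading); it is not the study's actual code.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("aphasia_language_scores.csv")  # hypothetical file
    print(df[["semantics", "phonology", "syntax", "oral_reading"]].corr())

    # Regression: which non-orthographic abilities predict oral reading?
    model = smf.ols("oral_reading ~ semantics + phonology + syntax", data=df).fit()
    print(model.summary())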
Results
Results showed that non-orthographic language abilities were significantly related to reading abilities. Semantics was most predictive of regular and irregular word reading, whereas phonology was most predictive of pseudohomophone and nonword reading. Written word and paragraph comprehension were primarily supported by semantics, whereas written sentence comprehension was related to semantic, phonologic, and syntactic performance. Finally, severity of alexia was found to reflect severity of semantic and phonologic impairment.
Conclusions
Findings support the primary systems view of language by showing that non-orthographic language abilities and reading abilities are closely linked. This preliminary work requires replication and extension; however, current results highlight the importance of routine, integrated assessment and treatment of spoken and written language in aphasia.
Supplemental Material
https://doi.org/10.23641/asha.7403963

from #Audiology via ola Kala on Inoreader https://ift.tt/2FX4LZq
via IFTTT

Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech

Purpose
Successful speech processing depends on our ability to detect and integrate multisensory cues, yet there is minimal research on multisensory speech detection and integration by children. To address this need, we studied the development of speech detection for auditory (A), visual (V), and audiovisual (AV) input.
Method
Participants were 115 typically developing children clustered into age groups between 4 and 14 years. Speech detection (quantified by response times [RTs]) was determined for 1 stimulus, /buh/, presented in A, V, and AV modes (articulating vs. static facial conditions). Performance was analyzed not only in terms of traditional mean RTs but also in terms of the faster versus slower RTs (defined by the 1st vs. 3rd quartiles of RT distributions). These time regions were conceptualized respectively as reflecting optimal detection with efficient focused attention versus less optimal detection with inefficient focused attention due to attentional lapses.
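The quartile-based RT analysis could look something like the following sketch, assuming a long-format table with hypothetical columns (child, age_group, mode, rt_ms) of correct-detection response times.

    import pandas as pd

    rt = pd.read_csv("detection_rts.csv")   # hypothetical: child, age_group, mode, rt_ms

    summary = (rt.groupby(["age_group", "mode"])["rt_ms"]
                 .agg(mean_rt="mean",
                      fast_rt=lambda x: x.quantile(0.25),   # faster responses (1st quartile)
                      slow_rt=lambda x: x.quantile(0.75)))  # slower responses (3rd quartile)
    print(summary)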
Results
Mean RTs indicated better detection (a) of multisensory AV speech than A speech only in 4- to 5-year-olds and (b) of A and AV inputs than V input in all age groups. The faster RTs revealed that AV input did not improve detection in any group. The slower RTs indicated that (a) the processing of silent V input was significantly faster for the articulating than static face and (b) AV speech or facial input significantly minimized attentional lapses in all groups except 6- to 7-year-olds (a peaked U-shaped curve). Apparently, the AV benefit observed for mean performance in 4- to 5-year-olds arose from effects of attention.
Conclusions
The faster RTs indicated that AV input did not enhance detection in any group, but the slower RTs indicated that AV speech and dynamic V speech (mouthing) significantly minimized attentional lapses and thus did influence performance. Overall, A and AV inputs were detected consistently faster than V input; this result endorsed stimulus-bound auditory processing by these children.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rk5Y2H
via IFTTT

Language Skill Mediates the Relationship Between Language Load and Articulatory Variability in Children With Language and Speech Sound Disorders

Purpose
The aim of the study was to investigate the relationship between language load and articulatory variability in children with language and speech sound disorders, including childhood apraxia of speech.
Method
Forty-six children, ages 48–92 months, participated in the current study, including children with speech sound disorder, developmental language disorder (aka specific language impairment), childhood apraxia of speech, and typical development. Children imitated (low language load task) then retrieved (high language load task) agent + action phrases. Articulatory variability was quantified using speech kinematics. We assessed language status and speech status (typical vs. impaired) in relation to articulatory variability.
Results
All children showed increased articulatory variability in the retrieval task compared with the imitation task. However, only children with language impairment showed a disproportionate increase in articulatory variability in the retrieval task relative to peers with typical language skills.
Conclusion
Higher-level language processes affect lower-level speech motor control processes, and this relationship appears to be more strongly mediated by language skill than by speech skill.

from #Audiology via ola Kala on Inoreader https://ift.tt/2G1UThf
via IFTTT

Basic Measures of Prosody in Spontaneous Speech of Children With Early and Late Cochlear Implantation

Purpose
Relative to normally hearing (NH) peers, the speech of children with cochlear implants (CIs) has been found to have deviations such as a high fundamental frequency, elevated jitter and shimmer, and inadequate intonation. However, two important dimensions of prosody (temporal and spectral) have not been systematically investigated. Given that, in general, the resolution in CI hearing is best for the temporal dimension and worst for the spectral dimension, we expected this hierarchy to be reflected in the amount of CI speech's deviation from NH speech. Deviations, however, were expected to diminish with increasing device experience.
Method
Spontaneous speech was recorded from 9 Dutch early- and late-implanted children (division at 2 years of age) and 12 hearing-age-matched NH controls at 18, 24, and 30 months after implantation (CI) or birth (NH). Six spectral and temporal outcome measures were compared between groups, sessions, and genders.
Results
On most measures, interactions of Group and/or Gender with Session were significant. For CI recipients as compared with controls, performance on temporal measures was not, in general, more deviant than performance on spectral measures, although differences were found for individual measures. The late-implanted group tended to be closer to the NH group than the early-implanted group. Groups converged over time.
Conclusions
Results did not support the phonetic dimension hierarchy hypothesis, suggesting that the appropriateness of the production of basic prosodic measures does not depend on auditory resolution. Rather, it seems to depend on the amount of control necessary for speech production.

from #Audiology via ola Kala on Inoreader https://ift.tt/2rjDqqj
via IFTTT

A preliminary investigation into hearing aid fitting based on automated real-ear measurements integrated in the fitting software: test–retest reliability, matching accuracy and perceptual outcomes

.


from #Audiology via ola Kala on Inoreader https://ift.tt/2QdQ2xR
via IFTTT

Wireless binaural hearing aid technology for telephone use and listening in wind noise.

Int J Audiol. 2018 Nov 24;:1-7

Authors: Au A, Blakeley JM, Dowell RC, Rance G

Abstract
OBJECTIVE: To assess the speech perception benefits of binaural streaming technology for bilateral hearing aid users in two difficult listening conditions.
DESIGN: Two studies were conducted to compare hearing aid processing features relating to telephone use and wind noise. Speech perception testing was conducted in four different experimental conditions in each study.
STUDY SAMPLE: Ten bilaterally aided children in each study.
RESULTS: Significant improvements in speech perception were obtained with a wireless feature for telephone use. Significant speech perception benefits were also obtained with wireless hearing aid features when listening to speech in simulated wind noise.
CONCLUSIONS: Binaural signal processing algorithms can significantly improve speech perception for bilateral hearing aid users in challenging listening situations.

PMID: 30474445 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2E0bGiB
via IFTTT

Patterns in the social representation of "hearing loss" across countries: how do demographic factors influence this representation?

Int J Audiol. 2018 Dec;57(12):925-932

Authors: Germundsson P, Manchaiah V, Ratinaud P, Tympas A, Danermark B

Abstract
This study aims to understand patterns in the social representation of hearing loss reported by adults across different countries and to explore the impact of different demographic factors on response patterns. The study used a cross-sectional survey design. Data were collected using a free association task and analysed using qualitative content analysis, cluster analysis and chi-square analysis. The study sample included 404 adults (18 years and over) from the general population in four countries (India, Iran, Portugal and the UK). The cluster analysis included 380 responses out of 404 (94.06%) and resulted in five clusters, named: (1) individual aspects; (2) aetiology; (3) the surrounding society; (4) limitations; and (5) exposed. Various demographic factors (age, occupation type, education and country) showed an association with different clusters, although country of origin seemed to be associated with most clusters. The results suggest that how hearing loss is represented among adults in the general population varies and is mainly related to country of origin. These findings strengthen the argument about cross-cultural differences in the perception of hearing loss, which calls for necessary accommodations to be made when developing public health strategies about hearing loss.

PMID: 30468404 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2QhBoF8
via IFTTT

The relationships among verbal ability, executive function, and theory of mind in young children with cochlear implants.

Int J Audiol. 2018 Dec;57(12):875-882

Authors: Liu M, Wu L, Wu W, Li G, Cai T, Liu J

Abstract
This study aims to examine the complex relationships among verbal ability (VA), executive function (EF), and theory of mind (ToM) in young Chinese children with cochlear implants (CCI). All participants were tested using a set of nine measures: one VA, one non-VA, three EF, and four ToM. Our study cohort comprised 82 children aged from 3.8 to 6.9 years, including 36 CCI and 46 children with normal hearing (CNH). CNH outperformed CCI on measures of VA, EF, and ToM. One of the EF tasks, inhibitory control, was significantly associated with ToM after controlling for VA. VA was the primary predictor of EF, while inhibitory control significantly predicted ToM. Our findings suggest that inhibitory control explains the association between EF and ToM, thereby supporting the hypothesis that EF may be a prerequisite for ToM.

PMID: 30465454 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2Qf5ZTT
via IFTTT

A preliminary study on time-compressed speech recognition in noise among teenage students who use personal listening devices.

Int J Audiol. 2018 Nov 15;:1-7

Authors: Li K, Xia L, Zheng Z, Liu W, Yang X, Feng Y, Zhang C

Abstract
OBJECTIVE: To compare speech perception obtained with different time compression rates in teenagers that do or do not use personal listening devices (PLDs).
DESIGN: Teenagers in a high school were recruited to complete questionnaires reporting their recreational noise exposure from PLD use. The dose of individual recreational noise exposure was calculated, and the individuals with the highest and lowest doses of recreational noise exposure were selected and grouped into PLD users and non-PLD users. Normal-rate and time-compressed (60% and 70%) speech recognition was measured in quiet and noisy conditions.
STUDY SAMPLE: The PLD user and non-PLD user groups each included 20 participants.
RESULTS: ANOVA showed that the effects of group, background, and compression rate, as well as the interactions between any two of these factors, were significant. Post hoc analysis showed that normal-rate speech recognition scores in quiet and in noise, and time-compressed speech recognition scores in quiet, did not differ significantly between PLD users and non-PLD users. However, the differences between the two groups in time-compressed (60% and 70%) speech recognition scores in noise were statistically significant.
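For illustration only, a simplified factorial ANOVA of this kind can be run as below; the real analysis was a repeated-measures design, which this sketch ignores, and all file and column names are invented.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical long-format scores: group, background (quiet/noise), rate, score.
    df = pd.read_csv("speech_scores.csv")
    model = smf.ols("score ~ C(group) * C(background) * C(rate)", data=df).fit()
    print(anova_lm(model, typ=2))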
CONCLUSIONS: Recognition of fast (time-compressed) speech in noise was significantly poorer in PLD users than in non-PLD users, the two groups having been selected on the basis of extreme recreational noise exposure.

PMID: 30442062 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2ONZdQ8
via IFTTT

International journal of audiology reviewer contact information.

Int J Audiol. 2018 Nov 15;:1-2

Authors: Preece JP

PMID: 30442049 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2zhPcWv
via IFTTT

Wireless binaural hearing aid technology for telephone use and listening in wind noise

.


from #Audiology via ola Kala on Inoreader https://ift.tt/2BRh5q3
via IFTTT

Patterns in the social representation of “hearing loss” across countries: how do demographic factors influence this representation?

Volume 57, Issue 12, December 2018, Page 925-932
.


from #Audiology via ola Kala on Inoreader https://ift.tt/2KR4kyB
via IFTTT

The relationships among verbal ability, executive function, and theory of mind in young children with cochlear implants

Volume 57, Issue 12, December 2018, Page 875-882
.


from #Audiology via ola Kala on Inoreader https://ift.tt/2BPFX1y
via IFTTT

A swing arm device for the acoustic measurement of food texture.

J Texture Stud. 2018 Nov 29;:

Authors: Akimoto H, Sakurai N, Blahovec J

Abstract
We developed a swing arm device for the acoustic measurement of food texture that resolves several difficulties of food texture evaluation. The device has a balance-style structure: a probe is moved downward by the motion of the swing arm according to the balance of the weights at its two ends and is inserted into a food sample. The device measures the displacement and acceleration of the probe during food fracture with high precision, up to the point at which the probe stops moving into the sample. From the displacement and acceleration of the probe at fracture, we calculated three parameters to characterize food texture. The Energy Texture Index (ETI), the kinetic energy of the probe associated with acoustic vibration, was evaluated from the vibration at food fracture. The Audible Energy Texture Index (aETI), obtained by multiplying ETI by human hearing sensitivity, was introduced to represent food texture as perceived by the human sense of hearing. ETI and aETI can therefore be used to measure the characteristic food texture detected at a tooth and perceived in the brain, respectively. The Food Friction Index (FFI), which describes the friction of the probe against a food sample, was formulated theoretically under the conditions of probe motion in the device. FFI was found to be useful not only for crispy foods such as biscuits but also for soft foods; the measured FFI indicated how smoothly the probe was inserted into the food sample.
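The abstract does not give the formulas for these indices, so the following is only a guess at the kind of computation implied: estimating the vibration velocity of the probe from its acceleration and forming a kinetic-energy-style index. All names, masses, and sampling rates are hypothetical.

    import numpy as np

    def energy_texture_index(acc: np.ndarray, dt: float, probe_mass: float) -> float:
        """Kinetic-energy-style index from the vibration component of probe motion."""
        acc = acc - acc.mean()            # keep only the vibration component
        vel = np.cumsum(acc) * dt         # integrate acceleration to velocity
        return float(0.5 * probe_mass * np.mean(vel ** 2))

    fs = 50_000                               # sampling rate (Hz), assumed
    acc = np.load("probe_acceleration.npy")   # hypothetical recording
    print(energy_texture_index(acc, 1 / fs, probe_mass=0.05))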

PMID: 30489633 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2PdtqZb
via IFTTT

Extracochlear Stimulation of Electrically Evoked Auditory Brainstem Responses (eABRs) Remains the Preferred Pre-implant Auditory Nerve Function Test in an Assessor-blinded Comparison.

Otol Neurotol. 2018 Nov 27;:

Authors: Causon A, O'Driscoll M, Stapleton E, Lloyd S, Freeman S, Munro KJ

Abstract
OBJECTIVE: Electrically evoked auditory brainstem responses (eABRs) can be recorded before cochlear implant (CI) surgery to verify auditory nerve function and are particularly helpful for assessing auditory nerve function in cases of auditory nerve hypoplasia. This is the first study to compare three preimplant eABR recording techniques: 1) standard extracochlear, 2) novel intracochlear, and 3) conventional intracochlear with the CI.
STUDY DESIGN: A within-participants design was used where eABRs were sequentially measured during CI surgery using three methods with stimulation from: 1) an extracochlear electrode placed at the round window niche, 2) two different electrodes on a recently developed Intracochlear Test Array (ITA), and 3) two different electrodes on a CI electrode array.
SETTING: Newly implanted adults (n = 16) were recruited through the Manchester Auditory Implant Centre, and eABR measurements were made in theatre at the time of CI surgery.
PATIENTS: All participants met the clinical criteria for cochlear implantation. Only participants with radiologically normal auditory nerves were recruited to the study. All participants were surgically listed for either a MED-EL Synchrony implant or a Cochlear Nucleus Profile implant, per standard practice in the implant centre.
OUTCOME MEASURES: Primary outcome measures were: 1) charge (μC) required to elicit a threshold response, and 2) latencies (ms) in the threshold waveforms. Secondary outcome measures were: 1) morphologies of responses at suprathreshold stimulation levels and 2) wave V growth patterns.
RESULTS: eABRs were successfully measured from 15 participants. In terms of primary outcome measures, the charge required to elicit a response using the extracochlear electrode (median = 0.075 μC) was approximately six times larger than all other electrodes and the latency of wave V was approximately 0.5 ms longer when using the extracochlear electrode (mean = 5.1 ms). In terms of secondary outcomes, there were some minor quantitative differences in responses between extracochlear and intracochlear stimulation; in particular, ITA responses were highly variable in quality. The ITA responses were rated poor quality in 33% of recordings and in two instances did not allow for data collection. When not disrupted by open circuits, the median ITA response contained one more waveform than the median extracochlear response.
CONCLUSIONS: In this first study comparing intracochlear and extracochlear stimulation, the results show that both can be used to produce an eABR that is representative of the one elicited by the CI. In the majority of cases, extracochlear stimulation was the preferred approach for preimplant auditory nerve function testing because of its consistency, because its recordings could be analyzed, and because extracochlear placement of the electrode does not require a cochleostomy to insert an electrode.

PMID: 30489452 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2zG7nW9
via IFTTT

Characterization of the Transcriptome of Hair Cell Regeneration in the Neonatal Mouse Utricle.

Cell Physiol Biochem. 2018 Nov 28;51(3):1437-1447

Authors: Han J, Wu H, Hu H, Yang W, Dong H, Liu Y, Nie G

Abstract
BACKGROUND/AIMS: Hearing and balance deficits are mainly caused by loss of sensory inner ear hair cells. The key signals that control hair cell regeneration are of great interest. However, the molecular events by which the cellular signals mediate hair cell regeneration in the mouse utricle are largely unknown.
METHODS: In the present study, we investigated gene expression changes and related molecular pathways using RNA-seq and qRT-PCR in the newborn mouse utricle in response to neomycin-induced damage.
RESULTS: A total of 302 genes were up-regulated and 624 genes were down-regulated in neomycin-treated samples. GO and KEGG pathway analyses of these genes revealed many deregulated cellular components, molecular functions, biological processes and signaling pathways that may be related to hair cell development. More importantly, the differentially expressed genes included 9 transcription factors from the zf-C2H2 family, and eight of them were consistently down-regulated during hair cell damage and subsequent regeneration.
CONCLUSION: Our results provide a valuable resource for future studies and highlight some promising genes, pathways and processes that may be useful for therapeutic applications.

PMID: 30485845 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2FPcN6x
via IFTTT

Disorders of the inner-ear balance organs and their pathways.

Handb Clin Neurol. 2018;159:385-401

Authors: Young AS, Rosengren SM, Welgampola MS

Abstract
Disorders of the inner-ear balance organs can be grouped by their manner of presentation into acute, episodic, or chronic vestibular syndromes. A sudden unilateral vestibular injury produces severe vertigo, nausea, and imbalance lasting days, known as the acute vestibular syndrome (AVS). A bedside head impulse and oculomotor examination helps separate vestibular neuritis, the more common and innocuous cause of AVS, from stroke. Benign positional vertigo, a common cause of episodic positional vertigo, occurs when otoconia overlying the otolith membrane fall into the semicircular canals, producing brief spells of spinning vertigo triggered by head movement. Benign positional vertigo is diagnosed by a positional test, which triggers paroxysmal positional nystagmus in the plane of the affected semicircular canal. Episodic spontaneous vertigo caused by vestibular migraine (VM) and Ménière's disease can sometimes prove hard to separate. Typically, Ménière's disease is associated with spinning vertigo lasting hours, aural fullness, tinnitus, and fluctuating hearing loss, while VM can produce spinning, rocking, or tilting sensations and light-headedness lasting minutes to days, sometimes but not always associated with migraine headaches or photophobia. Injury to both vestibular end-organs results in ataxia and oscillopsia rather than vertigo. Head impulse testing, dynamic visual acuity, and matted Romberg tests are abnormal, while conventional neurologic assessments are normal. A defect in the bony roof overlying the superior semicircular canal produces vertigo and oscillopsia provoked by loud sound and pressure (when coughing or sneezing). Three-dimensional temporal bone computed tomography scan and vestibular evoked myogenic potential testing help confirm the diagnosis of superior canal dehiscence. Collectively, these clinical syndromes account for a large proportion of dizzy and unbalanced patients.

PMID: 30482329 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2RpLoJV
via IFTTT

DNA methylation dynamics during embryonic development and postnatal maturation of the mouse auditory sensory epithelium.

Sci Rep. 2018 Nov 26;8(1):17348

Authors: Yizhar-Barnea O, Valensisi C, Jayavelu ND, Kishore K, Andrus C, Koffler-Brill T, Ushakov K, Perl K, Noy Y, Bhonker Y, Pelizzola M, Hawkins RD, Avraham KB

Abstract
The inner ear is a complex structure responsible for hearing and balance, and organ pathology is associated with deafness and balance disorders. To evaluate the role of epigenomic dynamics, we performed whole genome bisulfite sequencing at key time points during the development and maturation of the mouse inner ear sensory epithelium (SE). Our single-nucleotide resolution maps revealed variations in both general characteristics and dynamics of DNA methylation over time. This allowed us to predict the location of non-coding regulatory regions and to identify several novel candidate regulatory factors, such as Bach2, that connect stage-specific regulatory elements to molecular features that drive the development and maturation of the SE. Constructing in silico regulatory networks around sites of differential methylation enabled us to link key inner ear regulators, such as Atoh1 and Stat3, to pathways responsible for cell lineage determination and maturation, such as the Notch pathway. We also discovered that a putative enhancer, defined as a low methylated region (LMR), can upregulate the GJB6 gene and a neighboring non-coding RNA. The study of inner ear SE methylomes revealed novel regulatory regions in the hearing organ, which may improve diagnostic capabilities, and has the potential to guide the development of therapeutics for hearing loss by providing multiple intervention points for manipulation of the auditory system.

PMID: 30478432 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2Pi68kF
via IFTTT

Tonotopy in calcium homeostasis and vulnerability of cochlear hair cells.

Hear Res. 2018 Nov 16;:

Authors: Fettiplace R, Nam JH

Abstract
Ototoxicity, noise overstimulation, or aging can all produce hearing loss with similar properties, in which outer hair cells (OHCs), principally those at the high-frequency base of the cochlea, are preferentially affected. We suggest that the differential vulnerability may partly arise from differences in Ca2+ balance among cochlear locations. Homeostasis is determined by three factors: Ca2+ influx mainly via mechanotransducer (MET) channels; buffering by calcium-binding proteins and organelles like mitochondria; and extrusion by the plasma membrane CaATPase pump. We review quantification of these parameters and use our experimentally-determined values to model changes in cytoplasmic and mitochondrial Ca2+ during Ca2+ influx through the MET channels. We suggest that, in OHCs, there are two distinct micro-compartments for Ca2+ handling, one in the hair bundle and the other in the cell soma. One conclusion of the modeling is that there is a tonotopic gradient in the ability of OHCs to handle the Ca2+ load, which correlates with their vulnerability to environmental challenges. High-frequency basal OHCs are the most susceptible because they have much larger MET currents and have smaller dimensions than low-frequency apical OHCs.

PMID: 30473131 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2P4E4S3
via IFTTT

Internet-based interventions for adults with hearing loss, tinnitus and vestibular disorders: a protocol for a systematic review.

Syst Rev. 2018 Nov 23;7(1):205

Authors: Beukes EW, Manchaiah V, Baguley DM, Allen PM, Andersson G

Abstract
BACKGROUND: Internet-based interventions are emerging as an alternative way of delivering accessible healthcare for various conditions including hearing and balance disorders. A comprehensive review regarding the evidence-base of Internet-based interventions for auditory-related conditions is required to determine the existing evidence of their efficacy and effectiveness. The objective of the current protocol is to provide the methodology for a systematic review regarding the effects of Internet-based interventions for adults with hearing loss, tinnitus and vestibular disorders.
METHOD: This protocol was developed according to the Preferred Reporting Items for Systematic reviews and Meta-analyses for Protocols (PRISMA-P) 2015 guidelines. Electronic database searches of EBSCOhost, PubMed and the Cochrane Central Register will be performed by two researchers. These will be complemented by searching other resources, such as the reference lists of included studies, to identify studies meeting the eligibility criteria for inclusion with regard to study designs, participants, interventions, comparators and outcomes. The Cochrane risk of bias tool (RoB 2) for randomised trials will be used for the bias assessments in the included studies. Criteria for conducting meta-analyses were defined.
DISCUSSION: The result of this systematic review will be of value to establish the effects of Internet-based interventions for hearing loss, tinnitus and vestibular disorders. This will be of importance to guide future planning of auditory intervention research and clinical services by healthcare providers, researchers, consumers and stakeholders.
SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42018094801.

PMID: 30470247 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2PgUmal
via IFTTT

Beneficial Effects of Resveratrol Administration-Focus on Potential Biochemical Mechanisms in Cardiovascular Conditions.

Nutrients. 2018 Nov 21;10(11):

Authors: Wiciński M, Socha M, Walczak M, Wódkiewicz E, Malinowski B, Rewerski S, Górski K, Pawlak-Osińska K

Abstract
Resveratrol (RV) is a natural non-flavonoid polyphenol and phytoalexin produced by a number of plants such as peanuts, grapes, red wine and berries. Numerous in vitro studies have shown promising results for resveratrol as an antioxidant, antiplatelet or anti-inflammatory agent. The beneficial effects of resveratrol probably result from its ability to scavenge ROS (reactive oxygen species), to inhibit COX (cyclooxygenase) and to activate many anti-inflammatory pathways. Administration of the polyphenol has the potential to slow the development of CVD (cardiovascular disease) by influencing certain risk factors, such as the development of diabetes or atherosclerosis. Resveratrol induces an increase in Sirtuin-1 levels, which, by disrupting the TLR4/NF-κB/STAT signalling cascade (toll-like receptor 4/nuclear factor κ-light-chain enhancer of activated B cells/signal transducer and activator of transcription), reduces the production of cytokines in activated microglia. Resveratrol also attenuates macrophage/mast cell-derived pro-inflammatory factors such as PAF (platelet-activating factor), TNF-α (tumour necrosis factor-α) and histamine. The endothelial and antioxidative effects of resveratrol may contribute to better outcomes in stroke management. By increasing serum BDNF (brain-derived neurotrophic factor) concentration and inducing NOS-3 (nitric oxide synthase-3) activity, resveratrol may have therapeutic effects on cognitive impairment and dementia, especially those characterized by defective cerebrovascular blood flow.

PMID: 30469326 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2zG7bGp
via IFTTT

Vestibular rehabilitation: advances in peripheral and central vestibular disorders.

Curr Opin Neurol. 2018 Nov 15;:

Authors: Dunlap PM, Holmberg JM, Whitney SL

Abstract
PURPOSE OF REVIEW: Rehabilitation for persons with vertigo and balance disorders is becoming commonplace, and the literature is expanding rapidly. The present review highlights recent findings on both peripheral and central vestibular disorders and provides insight into evidence related to new rehabilitative interventions. Risk factors will be reviewed to create a better understanding of the patient and clinical characteristics that may affect recovery among persons with vestibular disorders.
RECENT FINDINGS: Clinical practice guidelines have recently been developed for peripheral vestibular hypofunction and updated for benign paroxysmal positional vertigo. Diagnoses such as persistent postural-perceptual dizziness (PPPD) and vestibular migraine are now defined, and there is growing literature supporting the effectiveness of vestibular rehabilitation as a treatment option. As technology advances, virtual reality and other technologies are being used more frequently to augment vestibular rehabilitation. Clinicians now have a better understanding of rehabilitation expectations and whom to refer based on evidence in order to improve functional outcomes for persons living with peripheral and central vestibular disorders.
SUMMARY: An up-to-date understanding of the evidence related to vestibular rehabilitation can assist the practicing clinician in making better clinical decisions for their patient and hopefully result in optimal functional recovery.

PMID: 30461465 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2PgUhn3
via IFTTT

A Comparison of Two Recording Montages for Ocular Vestibular Evoked Myogenic Potentials in Patients with Superior Canal Dehiscence Syndrome.

J Am Acad Audiol. 2018 Feb 08;:

Authors: Makowiec K, McCaslin D, Hatton K

Abstract
OBJECTIVE: The purpose of this investigation was to evaluate the sensitivity and specificity of the ocular vestibular evoked myogenic potential (oVEMP) using two electrode montages in patients with confirmed unilateral superior semicircular canal dehiscence syndrome (SCDS).
STUDY DESIGN: This study evaluated oVEMP response characteristics measured using two different electrode montages from 12 unilateral SCDS ears and 36 age-matched control ears (age range = 23-66). The oVEMP responses were elicited using 500 Hz tone-burst air conduction stimuli presented at an intensity of 95 dB nHL and a rate of 5.1/sec. The two electrode montages used are described as an "infraorbital" montage and a "belly-tendon" montage.
SETTING: Balance function laboratory embedded in a large, tertiary care otology clinic.
RESULTS: The belly-tendon electrode montage resulted in significantly larger amplitude responses than the infraorbital electrode montage for the ears with SCDS and the normal control ears. For both electrode montages the ear with SCDS exhibited a significantly larger amplitude response, ∼50% larger than the response amplitude from the normal control ear. The belly-tendon montage additionally produced larger median increases in amplitude compared with the infraorbital montage. Specifically, the median increase in oVEMP N1-P1 amplitudes using the belly-tendon montage was 39% greater in control ears, 76% greater in the SCDS ears, and 17% greater in the contralateral SCDS ears.
CONCLUSIONS: The belly-tendon electrode montage yields significantly larger oVEMP amplitude responses for participants with SCDS and normal control participants.
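For readers unfamiliar with the dependent measure, the oVEMP amplitude compared above is the N1-P1 peak-to-peak amplitude of the averaged waveform. The sketch below shows one way such an amplitude, and the percentage difference between montages, might be computed; the waveforms, window limits and resulting numbers are invented for illustration and are not the study's data.

```python
import numpy as np

def n1p1_amplitude(waveform_uV, t_ms, n1_win=(8, 12), p1_win=(12, 18)):
    """Peak-to-peak N1-P1 amplitude (uV) of an averaged oVEMP trace.
    The window limits are illustrative assumptions, not the study's settings."""
    t, w = np.asarray(t_ms), np.asarray(waveform_uV)
    n1 = w[(t >= n1_win[0]) & (t < n1_win[1])].min()   # negative peak
    p1 = w[(t >= p1_win[0]) & (t < p1_win[1])].max()   # positive peak
    return p1 - n1

# Hypothetical averaged traces for the two montages (invented shapes).
t = np.linspace(0.0, 30.0, 601)                        # 0-30 ms
infraorbital = -4.0 * np.exp(-(t - 10) ** 2 / 2) + 3.0 * np.exp(-(t - 15) ** 2 / 3)
belly_tendon = -5.6 * np.exp(-(t - 10) ** 2 / 2) + 4.2 * np.exp(-(t - 15) ** 2 / 3)

a_io = n1p1_amplitude(infraorbital, t)
a_bt = n1p1_amplitude(belly_tendon, t)
print("infraorbital N1-P1: %.1f uV" % a_io)
print("belly-tendon N1-P1: %.1f uV" % a_bt)
print("belly-tendon larger by %.0f%%" % (100.0 * (a_bt - a_io) / a_io))
```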

PMID: 30461389 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2OZCcd0
via IFTTT

[Gene therapy progress: hopes for Usher syndrome].

Med Sci (Paris). 2018 Oct;34(10):842-848

Authors: Calvet C, Lahlou G, Safieddine S

Abstract
Hearing and balance impairments are major concerns and a serious public health burden: they affect millions of people worldwide and still lack an effective curative therapy. Recent breakthroughs in preclinical and clinical studies using viral gene therapy suggest that such an approach might succeed in curing many genetic diseases. Our current understanding and comprehensive analysis of the molecular bases of genetic forms of deafness have provided multiple routes toward gene therapy to correct, replace, or modify the expression of defective endogenous genes involved in deafness. The aim of this review article is to summarize recent advances in the restoration of cochlear and vestibular function by local gene therapy in mouse models of Usher syndrome, the leading genetic cause of combined deafness and blindness worldwide. We focus herein on the therapeutic approaches with the highest potential for clinical application.

PMID: 30451679 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2PfrpvD
via IFTTT

Acoustic and vestibular barometry; air pressure effects on hearing and equilibrium of unoperated and fenestrated ears.

Ann Otol Rhinol Laryngol. 1949 Jun;58(2):323-44

Authors: JONES MF, EDMONDS FC

PMID: 18132921 [PubMed - indexed for MEDLINE]



from #Audiology via ola Kala on Inoreader https://ift.tt/2zIjpOK
via IFTTT

Wireless binaural hearing aid technology for telephone use and listening in wind noise.

Int J Audiol. 2018 Nov 24;:1-7

Authors: Au A, Blakeley JM, Dowell RC, Rance G

Abstract
OBJECTIVE: To assess the speech perception benefits of binaural streaming technology for bilateral hearing aid users in two difficult listening conditions.
DESIGN: Two studies were conducted to compare hearing aid processing features relating to telephone use and wind noise. Speech perception testing was conducted in four different experimental conditions in each study.
STUDY SAMPLE: Ten bilaterally-aided children in each study.
RESULTS: Significant improvements in speech perception were obtained with a wireless feature for telephone use. Significant speech perception benefits were also obtained with wireless hearing aid features when listening to speech in simulated wind noise.
CONCLUSIONS: Binaural signal processing algorithms can significantly improve speech perception for bilateral hearing aid users in challenging listening situations.

PMID: 30474445 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2E0bGiB
via IFTTT

Patterns in the social representation of "hearing loss" across countries: how do demographic factors influence this representation?

Int J Audiol. 2018 Dec;57(12):925-932

Authors: Germundsson P, Manchaiah V, Ratinaud P, Tympas A, Danermark B

Abstract
This study aims to understand patterns in the social representation of hearing loss reported by adults across different countries and to explore the impact of demographic factors on response patterns. The study used a cross-sectional survey design. Data were collected using a free association task and analysed using qualitative content analysis, cluster analysis and chi-square analysis. The study sample included 404 adults (18 years and over) from the general population of four countries (India, Iran, Portugal and the UK). The cluster analysis included 380 of the 404 responses (94.06%) and resulted in five clusters, named: (1) individual aspects; (2) aetiology; (3) the surrounding society; (4) limitations; and (5) exposed. Various demographic factors (age, occupation type, education and country) were associated with different clusters, although country of origin was associated with most clusters. The results suggest that how hearing loss is represented among adults in the general population varies and is mainly related to country of origin. These findings strengthen the argument for cross-cultural differences in the perception of hearing loss and call for appropriate accommodations when developing public health strategies about hearing loss.
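For orientation, the association between cluster membership and a demographic factor such as country can be tested with a chi-square test on a contingency table, along the following lines. The counts in this sketch are invented purely to show the mechanics and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cluster-by-country contingency table (all counts invented).
# Rows: the five clusters; columns: India, Iran, Portugal, UK.
table = np.array([
    [30, 18, 22, 25],   # (1) individual aspects
    [12, 25, 10, 15],   # (2) aetiology
    [20, 15, 25, 18],   # (3) the surrounding society
    [15, 20, 18, 22],   # (4) limitations
    [ 8, 12, 15, 10],   # (5) exposed
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```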

PMID: 30468404 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2QhBoF8
via IFTTT

The relationships among verbal ability, executive function, and theory of mind in young children with cochlear implants.

Int J Audiol. 2018 Dec;57(12):875-882

Authors: Liu M, Wu L, Wu W, Li G, Cai T, Liu J

Abstract
This study aims to examine the complex relationships among verbal ability (VA), executive function (EF), and theory of mind (ToM) in young Chinese children with cochlear implants (CCI). All participants were tested using a set of nine measures: one VA, one non-VA, three EF, and four ToM. Our study cohort comprised 82 children aged from 3.8 to 6.9 years, including 36 CCI and 46 children with normal hearing (CNH). CNH outperformed CCI on measures of VA, EF, and ToM. One of the EF tasks, inhibitory control, was significantly associated with ToM after controlling for VA. VA was the primary predictor of EF, while inhibitory control significantly predicted ToM. Our findings suggest that inhibitory control explains the association between EF and ToM, thereby supporting the hypothesis that EF may be a prerequisite for ToM.
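The phrase "associated with ToM after controlling for VA" describes a partial correlation. A minimal sketch of that computation, regressing verbal ability out of both scores and correlating the residuals, is shown below with simulated data; the variable names and values are illustrative only and are not the study's data.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, control):
    """Partial correlation of x and y controlling for a single covariate:
    regress the covariate out of each variable and correlate the residuals."""
    x, y, c = (np.asarray(v, dtype=float) for v in (x, y, control))
    def residuals(v):
        slope, intercept, *_ = stats.linregress(c, v)
        return v - (slope * c + intercept)
    return stats.pearsonr(residuals(x), residuals(y))

# Simulated scores for illustration only (not the study's data).
rng = np.random.default_rng(0)
va  = rng.normal(100, 15, 82)                      # verbal ability
ic  = 0.5 * va + rng.normal(0, 10, 82)             # inhibitory control
tom = 0.3 * va + 0.4 * ic + rng.normal(0, 10, 82)  # theory of mind

r, p = partial_corr(ic, tom, va)
print(f"partial r(IC, ToM | VA) = {r:.2f}, p = {p:.3g}")
```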

PMID: 30465454 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2Qf5ZTT
via IFTTT

A preliminary study on time-compressed speech recognition in noise among teenage students who use personal listening devices.

Int J Audiol. 2018 Nov 15;:1-7

Authors: Li K, Xia L, Zheng Z, Liu W, Yang X, Feng Y, Zhang C

Abstract
OBJECTIVE: To compare speech perception obtained with different time compression rates in teenagers who do or do not use personal listening devices (PLDs).
DESIGN: Teenagers at a high school were recruited to complete questionnaires reporting their recreational noise exposure from PLDs. The dose of individual recreational noise exposure was calculated, and the individuals with the highest and lowest doses were selected and grouped as PLD users and non-PLD users. Normal-rate and time-compressed (60% and 70%) speech recognition was measured in quiet and noisy conditions.
STUDY SAMPLE: The PLD user and non-PLD user groups each included 20 participants.
RESULTS: ANOVA showed that the effects of group, background and compression rate, and the interactions between any two of these factors, were significant. Post hoc analysis showed that speech recognition scores for normal-rate speech in quiet and in noise, and for time-compressed speech in quiet, did not differ significantly between PLD users and non-PLD users. However, differences between the two groups in time-compressed (60% and 70%) speech recognition scores in noise were statistically significant.
CONCLUSIONS: Recognition of time-compressed speech in noise was significantly poorer in PLD users than in non-PLD users, the groups having been selected on the basis of extreme recreational noise exposure.
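The individual noise-exposure dose mentioned in the design is typically derived from listening level and weekly duration using an equal-energy (3 dB exchange rate) rule relative to a criterion exposure. The sketch below shows that arithmetic; the 85 dBA/40 h criterion and the example listening habits are assumptions for illustration, since the paper's exact dose formula is not given in the abstract.

```python
def weekly_noise_dose(level_dBA, hours_per_week,
                      criterion_dBA=85.0, criterion_hours=40.0):
    """Percent of the allowed weekly noise dose under an equal-energy rule.

    Each 3 dB increase doubles the sound energy and halves the allowed
    listening time, so the allowed time at `level_dBA` is
        criterion_hours / 2 ** ((level_dBA - criterion_dBA) / 3).
    The 85 dBA / 40 h criterion is an assumption for illustration.
    """
    allowed_hours = criterion_hours / 2 ** ((level_dBA - criterion_dBA) / 3.0)
    return 100.0 * hours_per_week / allowed_hours

# Example: earphones at ~91 dBA for 10 hours a week (invented values).
print("dose = %.0f%% of the weekly criterion" % weekly_noise_dose(91.0, 10.0))
```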

PMID: 30442062 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2ONZdQ8
via IFTTT

International journal of audiology reviewer contact information.

Int J Audiol. 2018 Nov 15;:1-2

Authors: Preece JP

PMID: 30442049 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2zhPcWv
via IFTTT

A swing arm device for the acoustic measurement of food texture.

J Texture Stud. 2018 Nov 29;:

Authors: Akimoto H, Sakurai N, Blahovec J

Abstract
We developed a swing arm device for the acoustic measurement of food texture, which resolves several difficulties of food texture evaluation. The device has a balance-style structure: a probe is moved downward by the motion of the swing arm according to the balance of the weights at its two ends. The probe is inserted into a food sample, and the device measures the displacement and acceleration of the probe during fracture of the food with high precision, until the probe stops moving into the sample. From the displacement and acceleration of the probe on fracture, we calculated three parameters to characterize food texture. The Energy Texture Index (ETI), the kinetic energy of the probe's acoustic vibration, was evaluated from the vibration produced on food fracture. The Audible Energy Texture Index (aETI), introduced to represent the food texture perceived through the human sense of hearing, was obtained by weighting the ETI by human hearing sensitivity. The ETI and aETI can thus be used to measure the characteristic food texture detected at the tooth and perceived in the brain, respectively. The Food Friction Index (FFI), describing the frictional resistance of a food sample against the probe, was formulated theoretically from the conditions of probe motion in the device. The FFI was found to be useful not only for crispy foods such as biscuits but also for soft foods, and the measured FFI characterized the smoothness of probe insertion into the food sample.
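The indices above are described only qualitatively in the abstract. As a rough illustration of the kind of computation involved, the sketch below integrates the probe acceleration to obtain velocity, sums the probe's instantaneous kinetic energy as an ETI-like quantity, and applies a crude frequency weighting as a stand-in for human hearing sensitivity to obtain an aETI-like quantity. The formulas, the weighting and the probe mass are assumptions, not the authors' definitions.

```python
import numpy as np

def eti_like(acc, fs, probe_mass_kg=0.01):
    """ETI-like index: summed kinetic energy of the probe's vibration.

    acc: probe acceleration samples (m/s^2) recorded during fracture;
    fs: sampling rate (Hz). Velocity is obtained by numerical integration.
    The formula and the 10 g probe mass are illustrative stand-ins, not
    the paper's definition of the Energy Texture Index.
    """
    vel = np.cumsum(acc - np.mean(acc)) / fs          # integrate acceleration
    return float(np.sum(0.5 * probe_mass_kg * vel ** 2))

def aeti_like(acc, fs, probe_mass_kg=0.01):
    """aETI-like index: the same energy after a crude frequency weighting
    (here simply emphasising 1-5 kHz, where human hearing is most sensitive)."""
    spectrum = np.fft.rfft(acc - np.mean(acc))
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    weight = np.where((freqs > 1000.0) & (freqs < 5000.0), 1.0, 0.2)
    acc_weighted = np.fft.irfft(spectrum * weight, n=len(acc))
    return eti_like(acc_weighted, fs, probe_mass_kg)

# Synthetic "fracture" event: a 3 kHz ringing burst decaying over 20 ms.
fs = 50_000
t = np.arange(0, 0.02, 1.0 / fs)
acc = 200.0 * np.exp(-t / 0.005) * np.sin(2 * np.pi * 3000.0 * t)
print("ETI-like  : %.3e" % eti_like(acc, fs))
print("aETI-like : %.3e" % aeti_like(acc, fs))
```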

PMID: 30489633 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2PdtqZb
via IFTTT

Extracochlear Stimulation of Electrically Evoked Auditory Brainstem Responses (eABRs) Remains the Preferred Pre-implant Auditory Nerve Function Test in an Assessor-blinded Comparison.

Otol Neurotol. 2018 Nov 27;:

Authors: Causon A, O'Driscoll M, Stapleton E, Lloyd S, Freeman S, Munro KJ

Abstract
OBJECTIVE: Electrically evoked auditory brainstem responses (eABRs) can be recorded before cochlear implant (CI) surgery to verify auditory nerve function, and this is particularly helpful for assessing the auditory nerve in cases of auditory nerve hypoplasia. This is the first study to compare three preimplant eABR recording techniques: 1) standard extracochlear, 2) novel intracochlear, and 3) conventional intracochlear with the CI.
STUDY DESIGN: A within-participants design was used where eABRs were sequentially measured during CI surgery using three methods with stimulation from: 1) an extracochlear electrode placed at the round window niche, 2) two different electrodes on a recently developed Intracochlear Test Array (ITA), and 3) two different electrodes on a CI electrode array.
SETTING: Newly implanted adults (n = 16) were recruited through the Manchester Auditory Implant Centre, and eABR measurements were made in theatre at the time of CI surgery.
PATIENTS: All participants met the clinical criteria for cochlear implantation. Only participants with radiologically normal auditory nerves were recruited to the study. All participants were surgically listed for either a MED-EL Synchrony implant or a Cochlear Nucleus Profile implant, per standard practice in the implant centre.
OUTCOME MEASURES: Primary outcome measures were: 1) charge (μC) required to elicit a threshold response, and 2) latencies (ms) in the threshold waveforms. Secondary outcome measures were: 1) morphologies of responses at suprathreshold stimulation levels and 2) wave V growth patterns.
RESULTS: eABRs were successfully measured from 15 participants. In terms of the primary outcome measures, the charge required to elicit a response using the extracochlear electrode (median = 0.075 μC) was approximately six times larger than that for all other electrodes, and the latency of wave V was approximately 0.5 ms longer when using the extracochlear electrode (mean = 5.1 ms). In terms of secondary outcomes, there were some minor quantitative differences in responses between extracochlear and intracochlear stimulation; in particular, ITA responses were highly variable in quality. The ITA responses were rated as poor quality in 33% of recordings and in two instances did not allow data collection. When not disrupted by open circuits, the median ITA response contained one more waveform than the median extracochlear response.
CONCLUSIONS: In this first study comparing intracochlear and extracochlear stimulation, the results show that both can be used to produce an eABR that is representative of the one elicited by the CI. In the majority of cases, extracochlear stimulation was the preferred approach for preimplant auditory nerve function testing because of its consistency, because its recordings could be analyzed, and because extracochlear placement of the electrode does not require a cochleostomy.
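The primary outcome here is stimulus charge, which is simply the product of current amplitude and phase duration (Q = I × t). The sketch below shows the unit conversion to microcoulombs; the example current and pulse width are illustrative and are not the study's stimulation parameters, although the resulting value matches the order of magnitude of the reported extracochlear median.

```python
def charge_per_phase_uC(current_mA, phase_us):
    """Charge per phase in microcoulombs: Q = I * t.
    mA * us = nC, so divide by 1000 to express the result in uC."""
    return current_mA * phase_us / 1000.0

# Example values (illustrative, not the study's stimulation parameters):
# 1.5 mA biphasic pulses with 50 us per phase.
q = charge_per_phase_uC(1.5, 50.0)
print(f"{q:.3f} uC per phase")
```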

PMID: 30489452 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2zG7nW9
via IFTTT

Characterization of the Transcriptome of Hair Cell Regeneration in the Neonatal Mouse Utricle.

Cell Physiol Biochem. 2018 Nov 28;51(3):1437-1447

Authors: Han J, Wu H, Hu H, Yang W, Dong H, Liu Y, Nie G

Abstract
BACKGROUND/AIMS: Hearing and balance deficits are mainly caused by loss of sensory inner ear hair cells. The key signals that control hair cell regeneration are of great interest. However, the molecular events by which the cellular signals mediate hair cell regeneration in the mouse utricle are largely unknown.
METHODS: In the present study, we investigated gene expression changes and related molecular pathways using RNA-seq and qRT-PCR in the newborn mouse utricle in response to neomycin-induced damage.
RESULTS: A total of 302 genes were up-regulated and 624 genes were down-regulated in neomycin-treated samples. GO and KEGG pathway analyses of these genes revealed many deregulated cellular components, molecular functions, biological processes and signaling pathways that may be related to hair cell development. More importantly, the differentially expressed genes included nine transcription factors from the zf-C2H2 family, eight of which were consistently down-regulated during hair cell damage and subsequent regeneration.
CONCLUSION: Our results provide a valuable resource for future studies and highlight some promising genes, pathways and processes that may be useful for therapeutic applications.
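For orientation, differential-expression calls such as the counts reported above are typically obtained by thresholding fold change and adjusted p-value in an RNA-seq results table. The pandas sketch below shows that filtering step with invented column names, thresholds and values; it is not the study's analysis pipeline.

```python
import pandas as pd

def split_de_genes(results, lfc=1.0, alpha=0.05):
    """Split an RNA-seq results table into up- and down-regulated genes.
    Column names and thresholds are assumptions, not the study's settings."""
    sig = results[(results["padj"] < alpha) &
                  (results["log2FoldChange"].abs() >= lfc)]
    up = sig[sig["log2FoldChange"] > 0]
    down = sig[sig["log2FoldChange"] < 0]
    return up, down

# Toy results table with invented genes and values, to show the mechanics.
toy = pd.DataFrame({
    "gene": ["geneA", "geneB", "geneC", "geneD"],
    "log2FoldChange": [2.1, -1.8, -0.3, 0.05],
    "padj": [0.001, 0.02, 0.40, 0.90],
}).set_index("gene")

up, down = split_de_genes(toy)
print("up-regulated:", list(up.index), "| down-regulated:", list(down.index))
```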

PMID: 30485845 [PubMed - as supplied by publisher]



from #Audiology via ola Kala on Inoreader https://ift.tt/2FPcN6x
via IFTTT

Disorders of the inner-ear balance organs and their pathways.

Handb Clin Neurol. 2018;159:385-401

Authors: Young AS, Rosengren SM, Welgampola MS

Abstract
Disorders of the inner-ear balance organs can be grouped by their manner of presentation into acute, episodic, or chronic vestibular syndromes. A sudden unilateral vestibular injury produces severe vertigo, nausea, and imbalance lasting days, known as the acute vestibular syndrome (AVS). A bedside head impulse and oculomotor examination helps separate vestibular neuritis, the more common and innocuous cause of AVS, from stroke. Benign positional vertigo, a common cause of episodic positional vertigo, occurs when otoconia overlying the otolith membrane fall into the semicircular canals, producing brief spells of spinning vertigo triggered by head movement. Benign positional vertigo is diagnosed by a positional test, which triggers paroxysmal positional nystagmus in the plane of the affected semicircular canal. Episodic spontaneous vertigo caused by vestibular migraine (VM) and Ménière's disease can sometimes prove hard to separate. Typically, Ménière's disease is associated with spinning vertigo lasting hours, aural fullness, tinnitus, and fluctuating hearing loss, while VM can produce spinning, rocking, or tilting sensations and light-headedness lasting minutes to days, sometimes but not always associated with migraine headaches or photophobia. Injury to both vestibular end-organs results in ataxia and oscillopsia rather than vertigo; head impulse testing, dynamic visual acuity, and matted Romberg tests are abnormal, while conventional neurologic assessments are normal. A defect in the bony roof overlying the superior semicircular canal produces vertigo and oscillopsia provoked by loud sound and pressure (as when coughing or sneezing). Three-dimensional temporal bone computed tomography and vestibular evoked myogenic potential testing help confirm the diagnosis of superior canal dehiscence. Collectively, these clinical syndromes account for a large proportion of dizzy and unbalanced patients.

PMID: 30482329 [PubMed - in process]



from #Audiology via ola Kala on Inoreader https://ift.tt/2RpLoJV
via IFTTT
