Friday, September 15, 2017

Developmental Stuttering in Children Who Are Hard of Hearing

Purpose
A number of studies with large sample sizes have reported lower prevalence of stuttering in children with significant hearing loss compared to children without hearing loss. This study used a parent questionnaire to investigate the characteristics of stuttering (e.g., incidence, prevalence, and age of onset) in children who are hard of hearing (CHH).
Method
Three hundred three parents of CHH who participated in the Outcomes of Children With Hearing Loss study (Moeller & Tomblin, 2015) were sent questionnaires asking about their child's history of stuttering.
Results
One hundred ninety-four parents of CHH responded to the survey. Thirty-three CHH were reported to have stuttered at some point in time (an incidence of 17.01%), and 10 children were still stuttering at the time of survey submission (a prevalence of 5.15%). Compared to estimates in the general population, this sample displayed a significantly higher incidence and prevalence. The age of onset, recovery rate, and other characteristics were similar to those reported for hearing children.
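The reported rates follow directly from the response counts; a minimal Python check, using only the numbers quoted in the abstract:

```python
responders = 194            # parents who returned the survey
ever_stuttered = 33         # CHH reported to have stuttered at some point
currently_stuttering = 10   # CHH still stuttering at survey submission

incidence = ever_stuttered / responders         # 33/194
prevalence = currently_stuttering / responders  # 10/194
print(f"incidence: {incidence:.2%}, prevalence: {prevalence:.2%}")
# incidence: 17.01%, prevalence: 5.15%
```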
Conclusions
Based on this sample, mild to moderately severe hearing loss does not appear to be a protective factor for stuttering in the preschool years. In fact, the incidence and prevalence of stuttering may be higher in this population compared to the general population. Despite the significant speech and language needs that children with mild to moderately severe hearing loss may have, speech-language pathologists should appropriately prioritize stuttering treatment as they would in the hearing population.
Supplemental Material
http://ift.tt/2x5UlyF

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_LSHSS-17-0028/2654658/Developmental-Stuttering-in-Children-Who-Are-Hard
via IFTTT

JAAA CEU Program.

J Am Acad Audiol. 2017 Sep;28(8):770-771

PMID: 28906247 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2yenms0
via IFTTT

The Relationship between Central Auditory Processing, Language, and Cognition in Children Being Evaluated for Central Auditory Processing Disorder.

J Am Acad Audiol. 2017 Sep;28(8):758-769

Authors: Brenneman L, Cash E, Chermak GD, Guenette L, Masters G, Musiek FE, Brown M, Ceruti J, Fitzegerald K, Geissler K, Gonzalez J, Weihing J

Abstract
BACKGROUND: Pediatric central auditory processing disorder (CAPD) is frequently comorbid with other childhood disorders. However, few studies have examined the relationship between commonly used CAPD, language, and cognition tests within the same sample.
PURPOSE: The present study examined the relationship between diagnostic CAPD tests and "gold standard" measures of language and cognitive ability, the Clinical Evaluation of Language Fundamentals (CELF) and the Wechsler Intelligence Scale for Children (WISC).
RESEARCH DESIGN: A retrospective study.
STUDY SAMPLE: Twenty-seven patients referred for CAPD testing who scored average or better on the CELF and low average or better on the WISC were initially included. Seven children who scored below the CELF and/or WISC inclusion criteria were then added to the dataset for a second analysis, yielding a sample size of 34.
DATA COLLECTION AND ANALYSIS: Participants were administered a CAPD battery that included at least the following three CAPD tests: Frequency Patterns (FP), Dichotic Digits (DD), and Competing Sentences (CS). In addition, they were administered the CELF and WISC. Relationships between scores on CAPD, language (CELF), and cognition (WISC) tests were examined using correlation analysis.
RESULTS: DD and FP showed significant correlations with Full Scale Intelligence Quotient, and the DD left ear and the DD interaural difference measures both showed significant correlations with working memory. However, ∼80% or more of the variance in these CAPD tests was unexplained by language and cognition measures. Language and cognition measures were more strongly correlated with each other than were the CAPD tests with any CELF or WISC scale. Additional correlations with the CAPD tests were revealed when patients who scored in the mild-moderate deficit range on the CELF and/or in the borderline low intellectual functioning range on the WISC were included in the analysis.
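The "unexplained variance" figures above are one minus the squared correlation; a small illustration with a hypothetical coefficient (the abstract does not report the exact r values):

```python
def shared_variance(r: float) -> float:
    """Proportion of variance two measures share, given a Pearson correlation r."""
    return r ** 2

r = 0.45  # hypothetical CAPD-by-cognition correlation, for illustration only
print(f"explained: {shared_variance(r):.0%}, unexplained: {1 - shared_variance(r):.0%}")
# explained: 20%, unexplained: 80%
```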
CONCLUSIONS: While both the DD and FP tests showed significant correlations with one or more cognition measures, the majority of the variance in these CAPD measures went unexplained by cognition. Unlike DD and FP, the CS test was not correlated with cognition. Additionally, language measures were not significantly correlated with any of the CAPD tests. Our findings emphasize that the outcomes and interpretation of results vary as a function of the subject inclusion criteria that are applied for the CELF and WISC. Including participants with poorer cognition and/or language scores increased the number of significant correlations observed. For this reason, it is important that studies investigating the relationship between CAPD and other domains or disorders report the specific inclusion criteria used for all tests.

PMID: 28906246 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2faJIWV
via IFTTT

Potential Audiological and MRI Markers of Tinnitus.

J Am Acad Audiol. 2017 Sep;28(8):742-757

Authors: Gopal KV, Thomas BP, Nandy R, Mao D, Lu H

Abstract
BACKGROUND: Subjective tinnitus, or ringing sensation in the ear, is a common disorder with no accepted objective diagnostic markers.
PURPOSE: The purpose of this study was to identify possible objective markers of tinnitus by combining audiological and imaging-based techniques.
RESEARCH DESIGN: Case-control studies.
STUDY SAMPLE: Twenty adults drawn from our audiology clinic served as participants. The tinnitus group consisted of ten participants with chronic bilateral constant tinnitus, and the control group consisted of ten participants with no history of tinnitus. Each participant with tinnitus was closely matched with a control participant on the basis of age, gender, and hearing thresholds.
DATA COLLECTION AND ANALYSES: Data acquisition focused on systematic administration and evaluation of various audiological tests, including auditory-evoked potentials (AEP) and otoacoustic emissions, and magnetic resonance imaging (MRI) tests. A total of 14 objective test measures (predictors) obtained from audiological and MRI tests were subjected to statistical analyses to identify the best predictors of tinnitus group membership. The least absolute shrinkage and selection operator technique for feature extraction, supplemented by the leave-one-out cross-validation technique, were used to extract the best predictors. This approach provided a conservative model that was highly regularized with its error within 1 standard error of the minimum.
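As a rough sketch of the kind of analysis described (L1-penalized selection of group-membership predictors with leave-one-out cross-validation), the snippet below uses scikit-learn on placeholder data; it does not reproduce the authors' model or their one-standard-error rule:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import StandardScaler

# Placeholder data: 20 participants x 14 audiological/MRI predictors,
# y = 1 for tinnitus, 0 for matched control (not the study's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 14))
y = np.array([1] * 10 + [0] * 10)

X_std = StandardScaler().fit_transform(X)   # L1 penalties assume comparable scales
model = LogisticRegressionCV(
    Cs=20,                  # grid of inverse regularization strengths
    cv=LeaveOneOut(),       # leave-one-out cross-validation
    penalty="l1",
    solver="liblinear",
    scoring="accuracy",
).fit(X_std, y)

selected = np.flatnonzero(model.coef_[0])   # predictors surviving the L1 penalty
print("retained predictor indices:", selected)
```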
RESULTS: The model selected increased frontal cortex (FC) functional MRI activity to pure tones matching their respective tinnitus pitch, and augmented AEP wave N₁ amplitude growth in the tinnitus group as the top two predictors of tinnitus group membership. These findings suggest that the amplified responses to acoustic signals and hyperactivity in attention regions of the brain may be a result of overattention among individuals that experience chronic tinnitus.
CONCLUSIONS: These results suggest that increased functional MRI activity in the FC to sounds and augmented N₁ amplitude growth may potentially be the objective diagnostic indicators of tinnitus. However, due to the small sample size and lack of subgroups within the tinnitus population in this study, more research is needed before generalizing these findings.

PMID: 28906245 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2h6kqGi
via IFTTT

Hearing Aid Use and Mild Hearing Impairment: Learnings from Big Data.

J Am Acad Audiol. 2017 Sep;28(8):731-741

Authors: Timmer BHB, Hickson L, Launer S

Abstract
BACKGROUND: Previous research, mostly reliant on self-reports, has indicated that hearing aid (HA) use is related to the degree of hearing impairment (HI). No large-scale investigation of the relationship between data-logged HA use and HI has been conducted to date.
PURPOSE: This study aimed to investigate if objective measures of overall daily HA use and HA use in various listening environments are different for adults with mild HI compared to adults with moderate HI.
RESEARCH DESIGN: This retrospective study used data extracted from a database of fitting appointments from an international group of HA providers. Only data from the participants' most recent fitting appointment were included in the final dataset.
STUDY SAMPLE: A total of 8,489 bilateral HA fittings of adults over the age of 18 yr, conducted between January 2013 and June 2014, were included. Participants were subsequently allocated to HI groups, based on British Society of Audiology and American Speech-Language-Hearing Association audiometric descriptors.
DATA COLLECTION AND ANALYSIS: Fitting data from participating HA providers were regularly transferred to a central server. The data, with all personal information except age and gender removed, contained participants' four-frequency average (at 500, 1000, 2000, and 4000 Hz) as well as information on HA characteristics and usage. Following data cleaning, bivariate and post hoc statistical analyses were conducted.
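A minimal sketch of the four-frequency average described above; the descriptor cutoffs are approximate assumptions for illustration, not the study's exact BSA/ASHA grouping:

```python
def four_frequency_average(thresholds_db_hl: dict) -> float:
    """Pure-tone average across 500, 1000, 2000, and 4000 Hz."""
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

def hi_category(pta: float) -> str:
    """Rough BSA-style descriptor (illustrative cutoffs, assumed here)."""
    if pta <= 20:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 70:
        return "moderate"
    return "severe or worse"

audiogram = {500: 30, 1000: 35, 2000: 45, 4000: 50}  # hypothetical ear
pta = four_frequency_average(audiogram)
print(pta, hi_category(pta))  # 40.0 mild
```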
RESULTS: Across the total sample of adults, average daily HA use was 8.52 hr (interquartile range [IQR] = 5.49-11.77) in the left ear and 8.51 hr (IQR = 5.49-11.72) in the right ear. With a few exceptions, there were no statistical differences in hours of HA use between participants with mild HI and those with moderate impairment. Across all mild and moderate HI groups, the most common overall HA usage was between 8 and 12 hr per day. Other factors such as age, gender, and HA style also showed no relationship to hours of use. HAs were used, on average, for 7 hr (IQR = 4.27-9.96) per day in quiet and 1 hr (IQR = 0.33-1.41) per day in noisy listening situations.
CONCLUSIONS: Clinical populations with mild HI use HAs as frequently as those with a moderate HI. These findings support the recommendation of HAs for adults with milder degrees of HI.

PMID: 28906244 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2fnBhV4
via IFTTT

Pediatric Hearing Aid Management: Challenges among Hispanic Families.

J Am Acad Audiol. 2017 Sep;28(8):718-730

Authors: Caballero A, Muñoz K, White K, Nelson L, Domenech-Rodriguez M, Twohig M

Abstract
BACKGROUND: Hearing aid fitting in infancy has become more common in the United States as a result of earlier identification of hearing loss. Consistency of hearing aid use is an area of concern for young children, as are other hearing aid management challenges parents encounter that may contribute to less-than-optimal speech and language outcomes. Research describing the hearing aid management experiences of Spanish-speaking Hispanic families, or the extent of their needs, is not available. To effectively support parent learning in a culturally sensitive manner, providers may benefit from a better understanding of the needs and challenges Hispanic families experience with hearing aid management.
PURPOSE: The purpose of the current study was to describe challenges with hearing aid management and use for children from birth to 5 yr of age, as reported by Spanish-speaking parents in the United States, and factors that may influence hearing aid use.
RESEARCH DESIGN: This study used a cross-sectional survey design.
STUDY SAMPLE: Forty-two Spanish-speaking parents of children up to 5 yr of age who had been fitted with hearing aids.
DATA COLLECTION AND ANALYSIS: Responses were obtained from surveys mailed to parents through early intervention programs and audiology clinics. Descriptive statistics were used to describe frequencies and variance in responses.
RESULTS: Forty-seven percent of the parents reported the need for help from an interpreter during audiology appointments. Even though parents received information and were taught skills by their audiologist, many wanted to receive more information. For example, 59% wanted to know how to meet other parents of children who have hearing loss, although 88% had previously received this information; 56% wanted to know how to do basic hearing aid maintenance, although 71% had previously received instruction. The two most frequently reported hearing aid use challenges were fear of losing the hearing aids, and not seeing benefit from the hearing aids. Hearing aid use during all waking hours was reported by more parents (66%) when their child had a good day than when their child had a bad day (37%); during the previous two weeks, 35% of the parents indicated their child had all good days.
CONCLUSIONS: Hispanic parents wanted more comprehensive information, concrete resources, and emotional support from the audiologist to overcome hearing aid management challenges. Understanding parents' perspectives, experiences, and challenges is critical for audiologists to provide appropriate support in a culturally sensitive manner and to effectively address families' needs.

PMID: 28906243 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2fagE1K
via IFTTT

Safe Use of Acoustic Vestibular-Evoked Myogenic Potential Stimuli: Protocol and Patient-Specific Considerations.

J Am Acad Audiol. 2017 Sep;28(8):708-717

Authors: Portnuff CDF, Kleindienst S, Bogle JM

Abstract
BACKGROUND: Vestibular-evoked myogenic potentials (VEMPs) are commonly used clinical assessments for patients with complaints of dizziness. However, relatively high air-conducted stimuli are required to elicit the VEMP, and ultimately may compromise safe noise exposure limits. Recently, research has reported the potential for noise-induced hearing loss (NIHL) from VEMP stimulus exposure through studies of reduced otoacoustic emission levels after VEMP testing, as well as a recent case study showing permanent sensorineural hearing loss associated with VEMP exposure.
PURPOSE: The purpose of this report is to review the potential for hazardous noise exposure from VEMP stimuli and to suggest clinical parameters for safe VEMP testing.
RESEARCH DESIGN: Literature review with presentation of clinical guidelines and a clinical tool for estimating noise exposure.
RESULTS: The literature surrounding VEMP stimulus-induced hearing loss is reviewed, including several cases of overexposure. The article then presents a clinical calculation tool for the estimation of a patient's safe noise exposure from VEMP stimuli, considering stimulus parameters, and includes a discussion of how varying stimulus parameters affect a patient's noise exposure. Finally, recommendations are provided for recognizing and managing specific patient populations who may be at higher risk for NIHL from VEMP stimulus exposure. A sample protocol is provided that allows for safe noise exposure.
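To illustrate the sort of estimate such a calculation tool performs, the sketch below applies a simple equal-energy (3-dB exchange) calculation to a hypothetical VEMP run relative to an 8-hour reference period. It is not the authors' tool, and treating a VEMP stimulus level as an A-weighted exposure level is a simplifying assumption:

```python
import math

def vemp_equivalent_8h_level(level_db: float, n_stimuli: int, burst_s: float) -> float:
    """8-hour equivalent continuous level for a train of identical tone bursts,
    using the equal-energy (3-dB exchange) rule. Simplified illustration only."""
    on_time_s = n_stimuli * burst_s
    return level_db + 10 * math.log10(on_time_s / (8 * 3600))

# Hypothetical session: 200 sweeps x 4 runs x 2 ears of 8-ms bursts at 123 dB
print(round(vemp_equivalent_8h_level(123.0, 200 * 4 * 2, 0.008), 1), "dB (8-h equivalent)")
```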
CONCLUSIONS: VEMP stimuli have the potential to cause NIHL due to high sound exposure levels. However, with proper safety protocols in place, clinicians may reduce or eliminate this risk to their patients. Use of the tools provided, including the noise exposure calculation tool and sample protocols, may help clinicians to understand and ensure safe use of VEMP stimuli.

PMID: 28906242 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2x1il8h
via IFTTT

Tracking of Noise Tolerance to Measure Hearing Aid Benefit.

J Am Acad Audiol. 2017 Sep;28(8):698-707

Authors: Kuk F, Seper E, Lau CC, Korhonen P

Abstract
BACKGROUND: The benefits offered by noise reduction (NR) features on a hearing aid have traditionally been studied using test conditions that set the hearing aids into a stable state of performance. While adequate, this approach does not allow the differentiation of two NR algorithms that differ in their timing characteristics (i.e., activation and stabilization time).
PURPOSE: The current study investigated a new method of measuring noise tolerance (Tracking of Noise Tolerance [TNT]) as a means to differentiate hearing aid technologies. The study determined the within-session and between-session reliability of the procedure. The benefits provided by various hearing aid conditions (aided, two NR algorithms, and a directional microphone algorithm) were measured using this procedure. Performance of normal-hearing listeners was also measured for reference.
RESEARCH DESIGN: A single-blinded, repeated-measures design was used.
STUDY SAMPLE: Thirteen experienced hearing aid wearers with a bilaterally symmetrical (≤10 dB) mild-to-moderate sensorineural hearing loss participated in the study. In addition, seven normal-hearing listeners were tested in the unaided condition.
DATA COLLECTION AND ANALYSIS: Participants tracked the noise level that met the criterion of tolerable noise level (TNL) in the presence of an 85 dB SPL continuous discourse passage. The test conditions included an unaided condition and an aided condition with combinations of NR and microphone modes within the UNIQUE hearing aid (omnidirectional microphone, no NR; omnidirectional microphone, NR; directional microphone, no NR; and directional microphone, NR) and the DREAM hearing aid (omnidirectional microphone, no NR; omnidirectional microphone, NR). Each tracking trial lasted 2 min for each hearing aid condition. Normal-hearing listeners tracked in the unaided condition only. Nine of the 13 hearing-impaired listeners returned after 3 mo for retesting in the unaided and aided conditions with the UNIQUE hearing aid. The individual TNL was estimated for each participant for all test conditions. The TNT index was calculated as the difference between 85 dB SPL and the TNL.
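The TNT index defined above is simple arithmetic; a minimal sketch with example TNL values chosen to match the range reported in the results below:

```python
def tnt_index(tnl_db_spl: float, speech_level_db_spl: float = 85.0) -> float:
    """TNT index = speech level (85 dB SPL) minus the tolerable noise level (TNL)."""
    return speech_level_db_spl - tnl_db_spl

# A TNL of 82.8 dB SPL gives +2.2 dB; a TNL of 89.4 dB SPL gives -4.4 dB
# (i.e., noise tolerated above the level of the speech passage).
print(round(tnt_index(82.8), 1), round(tnt_index(89.4), 1))
```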
RESULTS: The TNT index varied from 2.2 dB in the omnidirectional microphone, no NR condition to -4.4 dB in the directional microphone, NR on condition. Normal-hearing listeners reported a TNT index of -5.7 dB using this procedure. The averaged improvement in TNT offered by the NR algorithm on the UNIQUE varied from 2.1 dB when used with a directional microphone to 3.0 dB when used with the omnidirectional microphone. The time course of the NR algorithm was different between the UNIQUE and the DREAM hearing aids, with the UNIQUE reaching a stable TNL sooner than the DREAM. The averaged improvement in TNT index from the UNIQUE directional microphone was 3.6 dB when NR was activated and 4.4 dB when NR was deactivated. Together, directional microphone and NR resulted in a total TNT improvement of 6.5 dB. The test-retest reliability of the procedure was high, with an intrasession 95% confidence interval (CI) of 2.2 dB and an intersession 95% CI of 4.2 dB.
CONCLUSIONS: The effect of the NR and directional microphone algorithms was measured to be 2-3 and 3.6-4.4 dB, respectively, using the TNT procedure. Because of its tracking property and reliability, this procedure may hold promise in differentiating among some hearing aid features that also differ in their time course of action.

PMID: 28906241 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2wwcFyY
via IFTTT

Listening Effort Measured in Adults with Normal Hearing and Cochlear Implants.

J Am Acad Audiol. 2017 Sep;28(8):685-697

Authors: Perreau AE, Wu YH, Tatge B, Irwin D, Corts D

Abstract
BACKGROUND: Studies have examined listening effort in individuals with hearing loss to determine the extent of the impairment. Regarding cochlear implants (CIs), results suggest that listening effort is improved using bilateral CIs compared to unilateral CIs. Few studies have investigated listening effort and outcomes related to the hybrid CI.
PURPOSE: Here, we compared listening effort across three CI groups and a normal-hearing control group. The impact of listener traits, that is, age, age at onset of hearing loss, duration of CI use, and working memory capacity, was examined relative to listening effort.
RESEARCH DESIGN: The participants completed a dual-task paradigm with a primary task identifying sentences in noise and a secondary task measuring reaction time on a Stroop test. Performance was assessed for all participant groups at different signal-to-noise ratios (SNRs), ranging in 2-dB steps from 0 to +10 dB relative to an individual's SNR-50, at which the speech recognition performance is 50% correct. Participants completed three questions on listening effort, the Spatial Hearing Questionnaire, and a reading span test.
STUDY SAMPLE: All 46 participants were adults. The four participant groups included (1) 12 individuals with normal hearing, (2) 10 with unilateral CIs, (3) 12 with bilateral CIs, and (4) 12 with a hybrid short-electrode CI and bilateral residual hearing.
DATA COLLECTION AND ANALYSIS: Results from the dual-task experiment were compared using a mixed 4 (hearing group) by 6 (SNR condition) analysis of variance (ANOVA). Questionnaire results were compared using one-way ANOVAs, and correlations between listener traits and the objective and subjective measures were compared using Pearson correlation coefficients.
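A hedged sketch of a comparable 4 x 6 mixed ANOVA and trait correlation in Python, assuming the pingouin and scipy packages; the data are placeholders, not the study's measurements:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Placeholder long-format data: one reaction-time cost per subject per SNR step.
rng = np.random.default_rng(1)
groups = ["NH"] * 12 + ["uni CI"] * 10 + ["bi CI"] * 12 + ["hybrid CI"] * 12
rows = []
for sid, grp in enumerate(groups):
    for snr in range(0, 12, 2):  # 0 to +10 dB re: SNR-50, in 2-dB steps
        rows.append({"subject": sid, "group": grp, "snr": snr,
                     "rt_cost_ms": rng.normal(250 - 5 * snr, 30)})
df = pd.DataFrame(rows)

# Mixed 4 (hearing group) x 6 (SNR condition) ANOVA on the secondary-task measure
aov = pg.mixed_anova(data=df, dv="rt_cost_ms", within="snr",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])

# Pearson correlation between a listener trait (e.g., age) and mean effort cost
ages = rng.integers(20, 80, size=len(groups))
mean_cost = df.groupby("subject")["rt_cost_ms"].mean().to_numpy()
print(pearsonr(ages, mean_cost))
```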
RESULTS: Significant differences were found in speech perception among the normal-hearing and the unilateral and the bilateral CI groups. There was no difference in primary task performance among the hybrid CI and the normal-hearing groups. Across the six SNR conditions, listening effort improved to a greater degree for the normal-hearing group compared to the CI groups. However, there was no significant difference in listening effort between the CI groups. The subjective measures revealed significant differences between the normal-hearing and CI groups, but no difference among the three CI groups. Across all groups, age was significantly correlated with listening effort. We found no relationship between listening effort and the age at the onset of hearing loss, age at implantation, the duration of CI use, and working memory capacity for these participants.
CONCLUSIONS: Listening effort was reduced to a greater degree for the normal-hearing group compared to the CI users. There was no significant difference in listening effort among the CI groups. For the CI users in this study, age was a significant factor with regard to listening effort, whereas other variables such as the duration of CI use and the age at the onset of hearing loss were not significantly related to listening effort.

PMID: 28906240 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2vXTLl2
via IFTTT

Auditory Processing Testing: In the Booth versus Outside the Booth.

J Am Acad Audiol. 2017 Sep;28(8):679-684

Authors: Lucker JR

Abstract
BACKGROUND: Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms.
PURPOSE: The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in a soundproof test suite. The outcomes would indicate whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their schools.
RESEARCH DESIGN: A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations.
STUDY SAMPLE: Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced.
DATA COLLECTION AND ANALYSIS: Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth.
RESULTS: No differences were found on any auditory processing measure as a function of the test setting or the order in which testing was done (in the booth versus in the room).
CONCLUSIONS: Results from the present study indicate that one can obtain the same results on auditory processing tests, regardless of whether testing is completed in a soundproof booth or in a quiet test environment. Therefore, audiologists should not be required to test for auditory processing in a soundproof booth. This study shows that audiologists can conduct testing in a quiet room so long as the background noise is sufficiently controlled.

PMID: 28906239 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2ydjPu9
via IFTTT

Safe Stimulus Intensities for VEMP Testing.

J Am Acad Audiol. 2017 Sep;28(8):678

Authors: Jacobson GP

PMID: 28906238 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/2ydqowA
via IFTTT

The Effect of Stimulus Variability on Learning and Generalization of Reading in a Novel Script

Purpose
The benefit of stimulus variability for generalization of acquired skills and knowledge has been shown in motor, perceptual, and language learning but has rarely been studied in reading. We studied the effect of variable training in a novel language on reading trained and untrained words.
Method
Sixty typical adults received 2 sessions of training in reading an artificial script. Participants were assigned to 1 of 3 groups: a variable training group practicing a large set of 24 words, and 2 nonvariable training groups practicing a smaller set of 12 words, with twice the number of repetitions per word.
Results
Variable training resulted in higher accuracy for both trained and untrained items composed of the same graphemes, compared to the nonvariable training. Moreover, performance on untrained items was correlated with phonemic awareness only for the nonvariable training groups.
Conclusions
High stimulus variability increases the reliance on small unit decoding in adults reading in a novel script, which is beneficial for both familiar and novel words. These results show that the statistical properties of the input during reading acquisition influence the type of acquired knowledge and have theoretical and practical implications for planning efficient reading instruction methods.
Supplemental Material
http://ift.tt/2h7vgMh

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0293/2654585/The-Effect-of-Stimulus-Variability-on-Learning-and
via IFTTT

Language Sample Analysis and Elicitation Technique Effects in Bilingual Children With and Without Language Impairment

Purpose
This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination index [SI]), which are commonly used indices for diagnosing primary language impairment in Spanish–English-speaking children in the United States; a sketch of the simplest of these indices follows below.
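Of the indices listed, MLUw is the most mechanical to compute; a minimal sketch on an invented transcript (real language sample analysis involves utterance segmentation conventions not shown here):

```python
def mlu_words(utterances):
    """Mean length of utterance in words (MLUw) for a list of utterance strings."""
    counts = [len(u.split()) for u in utterances if u.strip()]
    return sum(counts) / len(counts)

sample = ["the dog is running", "he fell", "mommy gave him a big bone"]
print(round(mlu_words(sample), 2))  # 4.0
```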
Method
Twenty bilingual Spanish–English-speaking children with typical language development and 20 with primary language impairment participated in the study. Four analyses of variance were conducted to evaluate the effect of language elicitation technique and group on D, GE/CU, MLUw, and SI. Also, 2 discriminant analyses were conducted to assess which indices were more effective for story retelling and storytelling and their classification accuracy across elicitation techniques.
Results
D, MLUw, and SI were influenced by the type of elicitation technique, but GE/CU was not. The classification accuracy of language sample analysis was greater in story retelling than in storytelling, with GE/CU and D being useful indicators of language abilities in story retelling and GE/CU and SI in storytelling.
Conclusion
Two indices in language sample analysis may be sufficient for diagnosis in 4- to 5-year-old bilingual Spanish–English-speaking children.
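
A minimal sketch can make the discriminant-analysis step above concrete. The feature values, group labels, and the scikit-learn calls below are illustrative assumptions of ours, not the authors' data or analysis code; the sketch only shows how two language-sample indices (here GE/CU and D) could be entered into a linear discriminant analysis and scored with leave-one-out cross-validation.

# Illustrative only: classifying language status from two language-sample indices.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical per-child indices [GE/CU, D]; labels: 0 = typical, 1 = language impairment.
X = np.array([
    [0.10, 45], [0.20, 50], [0.15, 48], [0.05, 55], [0.10, 52],   # typical development
    [0.60, 30], [0.70, 28], [0.50, 33], [0.80, 25], [0.65, 31],   # language impairment
])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

lda = LinearDiscriminantAnalysis()
# Leave-one-out accuracy, analogous to reporting the classification accuracy
# of a two-index model for a given elicitation technique.
accuracy = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out classification accuracy: {accuracy:.2f}")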

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0335/2654586/Language-Sample-Analysis-and-Elicitation-Technique
via IFTTT

Investigating the Role of Salivary Cortisol on Vocal Symptoms

Purpose
We investigated whether participants who reported more often occurring vocal symptoms showed higher salivary cortisol levels and if such possible associations were different for men and women.
Method
The participants (N = 170; men n = 49, women n = 121) were drawn from a population-based sample of Finnish twins born between 1961 and 1989. The participants submitted saliva samples for hormone analysis and completed a web questionnaire that included questions about the occurrence of 6 vocal symptoms during the past 12 months. The data were analyzed using the generalized estimating equations method.
Results
A composite variable of the vocal symptoms showed a significant positive association with salivary cortisol levels (p < .001). Three of the 6 vocal symptoms were significantly associated with the level of cortisol when analyzed separately (p values less than .05). The results showed no gender difference regarding the effect of salivary cortisol on vocal symptoms.
Conclusions
There was a positive association between the occurrence of vocal symptoms and salivary cortisol levels. Participants with higher cortisol levels reported more often occurring vocal symptoms. This could have a connection to the influence of stress on vocal symptoms because stress is a known risk factor of vocal symptoms and salivary cortisol can be seen as a biomarker for stress.
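
The generalized estimating equations (GEE) analysis named in the Method can be sketched in a few lines. Everything below is a hypothetical illustration: the variable names, the Poisson outcome family, and the simulated data are our assumptions, chosen only to show how symptom counts could be regressed on cortisol while treating twin pairs as correlated clusters.

# Hedged GEE sketch using statsmodels; not the authors' actual model or data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_pairs = 60
df = pd.DataFrame({
    "pair_id": np.repeat(np.arange(n_pairs), 2),        # twin pair = cluster
    "cortisol": rng.normal(10.0, 3.0, 2 * n_pairs),     # hypothetical nmol/L values
    "female": rng.integers(0, 2, 2 * n_pairs),
})
# Hypothetical outcome: count of vocal symptoms (0-6) loosely tied to cortisol.
df["symptoms"] = np.minimum(rng.poisson(np.exp(-1.0 + 0.08 * df["cortisol"])), 6)

model = smf.gee(
    "symptoms ~ cortisol + female",
    groups="pair_id",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())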

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-S-16-0058/2654587/Investigating-the-Role-of-Salivary-Cortisol-on
via IFTTT

Error Type and Lexical Frequency Effects: Error Detection in Swedish Children With Language Impairment

Purpose
The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies.
Method
Error sensitivity for grammatical structures vulnerable in Swedish-speaking preschool children with LI (omission of the indefinite article in a noun phrase with a neuter/common noun, and use of the infinitive instead of past-tense regular and irregular verbs) was compared to a control error (singular noun instead of plural). Target structures involved a high-frequency (HF) or a low-frequency (LF) noun/verb. Grammatical and ungrammatical sentences were presented over headphones, and responses were collected through button presses.
Results
Children with LI had similar sensitivity to the plural control error as peers with typical language development, but lower sensitivity to past-tense errors and noun phrase errors. All children showed lexical frequency effects for errors involving verbs (HF > LF), and noun gender effects for noun phrase errors (common > neuter).
Conclusions
School-age children with LI may have subtle difficulties with morphosyntactic processing that mirror expressive difficulties in preschool children with LI. Lexical frequency may affect morphosyntactic processing, which has clinical implications for assessment of grammatical knowledge.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0294/2654583/Error-Type-and-Lexical-Frequency-Effects-Error
via IFTTT

Home and Community Language Proficiency in Spanish–English Early Bilingual University Students

Purpose
This study assessed home and community language proficiency in Spanish–English bilingual university students to investigate whether the vocabulary gap reported in studies of bilingual children persists into adulthood.
Method
Sixty-five early bilinguals (mean age = 21 years) were assessed in English and Spanish vocabulary and verbal reasoning ability using subtests of the Woodcock-Muñoz Language Survey–Revised (Schrank & Woodcock, 2009). Their English scores were compared to 74 monolinguals matched in age and level of education. Participants also completed a background questionnaire.
Results
Bilinguals scored below the monolingual control group on both subtests, and the difference was larger for vocabulary compared to verbal reasoning. However, bilinguals were close to the population mean for verbal reasoning. Spanish scores were on average lower than English scores, but participants differed widely in their degree of balance. Participants with an earlier age of acquisition of English and more current exposure to English tended to be more dominant in English.
Conclusions
Vocabulary tests in the home or community language may underestimate bilingual university students' true verbal ability and should be interpreted with caution in high-stakes situations. Verbal reasoning ability may be more indicative of a bilingual's verbal ability.

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_JSLHR-L-16-0341/2654584/Home-and-Community-Language-Proficiency-in
via IFTTT

Estimating Nonorganic Hearing Thresholds Using Binaural Auditory Stimuli

Purpose
Minimum contralateral interference levels (MCILs) are used to estimate true hearing thresholds in individuals with unilateral nonorganic hearing loss. In this study, we determined MCILs and examined the correspondence of MCILs to true hearing thresholds to quantify the accuracy of this procedure.
Method
Sixteen adults with normal hearing participated. Subjects were asked to feign a unilateral hearing loss at 1.0, 2.0, and 4.0 kHz. MCILs were determined. Subjects also made lateralization judgments for simultaneously presented tones with varying interaural intensity differences.
Results
The 90% confidence intervals calculated for the distributions indicate that, in 90% of cases, the MCIL would be expected to fall anywhere from very close to the true hearing threshold to approximately 17–19 dB poorer than it. How close the MCIL is to the true threshold appears to depend on the individual's response criterion.
Conclusions
Response bias influences the MCIL and how close an MCIL is to the true hearing threshold. Because the clinician can never know a client's response bias, a 90% confidence interval should be used to predict the range of the expected true threshold. On this basis, a clinician may assume that the true threshold lies at the MCIL or as much as 19 dB better than the MCIL.
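
As a small numerical illustration of the rule above, the helper below converts an observed MCIL into the interval in which the true threshold would be expected to lie. The function name and the example value are ours; only the 19-dB bound comes from the abstract.

# Illustrative helper, not from the article: expected true-threshold range given an MCIL.
def expected_true_threshold_range(mcil_db_hl: float, max_overshoot_db: float = 19.0):
    """Per the abstract, true threshold is assumed to be at the MCIL or up to ~19 dB better."""
    return (mcil_db_hl - max_overshoot_db, mcil_db_hl)

low, high = expected_true_threshold_range(45.0)   # hypothetical MCIL of 45 dB HL
print(f"Expected true threshold between {low:.0f} and {high:.0f} dB HL")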

from #Audiology via ola Kala on Inoreader http://article/doi/10.1044/2017_AJA-16-0096/2654582/Estimating-Nonorganic-Hearing-Thresholds-Using
via IFTTT

A simple model of the inner-hair-cell ribbon synapse accounts for mammalian auditory-nerve-fiber spontaneous spike times

Publication date: Available online 15 September 2017
Source:Hearing Research
Author(s): Adam J. Peterson, Peter Heil
The initial neural encoding of acoustic information occurs by means of spikes in primary auditory afferents. Each mammalian primary auditory afferent (type-I auditory-nerve fiber; ANF) is associated with only one ribbon synapse in one receptor cell (inner hair cell; IHC). The properties of ANF spike trains therefore provide an indirect view of the operation of individual IHC synapses. We showed previously that a point process model of presynaptic vesicle pool depletion and deterministic exponential replenishment, combined with short postsynaptic neural refractoriness, accounts for the interspike interval (ISI) distributions, serial ISI correlations, and spike-count statistics of a population of cat-ANF spontaneous spike trains. Here, we demonstrate that this previous synapse model produces unrealistic properties when spike rates are high and show that this problem can be resolved if the replenishment of each release site is stochastic and independent. We assume that the depletion probability varies between synapses to produce differences in spontaneous rate and that the other model parameters are constant across synapses. We find that this model fits best with only four release sites per IHC synapse, a mean replenishment time of 17 ms, and absolute and mean relative refractory periods of 0.6 ms each. This model accounts for ANF spontaneous spike timing better than two influential, comprehensive models of the auditory periphery. It also reproduces ISI distributions from spontaneous (and steady-state driven) activity from other studies and other mammalian species. Adding fractal noise to the rate of depletion of each release site can yield long-range correlations as typically observed in long spike trains. We also examine two model variants having more complex vesicle cycles, but neither variant yields a markedly improved fit or a different estimate of the number of release sites. In addition, we examine a model variant having both short and long relative refractory components and find that it cannot account for all aspects of the data. These model results will be beneficial for understanding ANF responses to acoustic stimulation.
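
A short simulation can make the model's structure concrete. The sketch below implements a generic release-site depletion model with independent stochastic replenishment and spike refractoriness; the headline parameters (four sites, 17-ms mean replenishment, 0.6-ms absolute and 0.6-ms mean relative refractoriness) follow the abstract, but the per-site release rate, the discrete-time scheme, and all other details are our own simplifications rather than the authors' published implementation.

# Hedged simulation sketch of a depletion/stochastic-replenishment synapse model.
import numpy as np

rng = np.random.default_rng(1)

dt = 1e-4                 # time step (s)
duration = 30.0           # simulated time (s)
n_sites = 4               # release sites per synapse (abstract's best-fitting value)
mean_replenish = 0.017    # mean replenishment time per empty site (s)
release_rate = 60.0       # per-site release hazard while occupied (1/s); assumed value
t_abs = 0.0006            # absolute refractory period (s)
t_rel_mean = 0.0006       # mean of exponential relative refractory period (s)

occupied = np.ones(n_sites, dtype=bool)
refractory_until = 0.0
spike_times = []

for step in range(int(duration / dt)):
    t = step * dt
    # Each empty site is refilled independently with hazard 1 / mean_replenish.
    empty = ~occupied
    occupied[empty] = rng.random(empty.sum()) < dt / mean_replenish
    # Occupied sites may release; released sites become empty (depleted).
    released = occupied & (rng.random(n_sites) < release_rate * dt)
    if released.any():
        occupied[released] = False
        if t >= refractory_until:          # a release event becomes a spike
            spike_times.append(t)
            refractory_until = t + t_abs + rng.exponential(t_rel_mean)

isis = np.diff(spike_times)
print(f"spontaneous rate ~ {len(spike_times) / duration:.1f} spikes/s, "
      f"mean ISI ~ {isis.mean() * 1e3:.1f} ms, ISI CV ~ {isis.std() / isis.mean():.2f}")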



from #Audiology via ola Kala on Inoreader http://ift.tt/2fos92n
via IFTTT

Airflow Error Measurement Due to Pneumotachograph Mask Rim Leaks

Publication date: Available online 14 September 2017
Source:Journal of Voice
Author(s): Nicholas A. May, Ronald C. Scherer
Airflow during speech production is recorded using a pneumotachograph system in which a mask is typically placed on the face. Accurate measures of airflow require mask calibration and a complete seal of the mask rim to the face. The literature frequently cites mask rim leaks as a cause of flow measurement inaccuracies, but quantitative studies of those inaccuracies are needed. The purpose of this study was to determine the degree of inaccuracy of flow measurement using a Glottal Enterprises aerodynamic system for a variety of mask rim leak conditions. Air was pushed and pulled through the Glottal Enterprises mask system over a wide range of airflows, with leaks simulated by small metal tubes of various cross-sectional areas placed between the mask rim and a face-like calibration mold. Two leak locations, single versus multiple leaks, and two different leak tube geometries were used. Results suggest that (1) as leak area increases, the amount of leak flow increases; (2) the amount of flow leak is relatively independent of location; (3) given equivalent leak areas, multiple leak locations produce less leak flow; and (4) quasi-elliptical tubes were more resistive to airflow than rectangular tubes. A general empirical equation was obtained that relates the leak flow between the mask rim and the face, the size of the leak region, and the upstream airflow toward the mask: Leak (cm³/s) = 0.33 × Area (cm²) × Flow (cm³/s), valid over the range of ±2000 cm³/s. This equation may provide researchers and clinicians with a tool for generalizing airflow leak effects.
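
Because the reported relation is linear in both leak area and upstream flow, it is straightforward to apply. The short sketch below evaluates it for a hypothetical leak; the function and all example numbers are ours, for illustration only.

# Worked example of the empirical relation Leak = 0.33 × Area × Flow (valid within ±2000 cm³/s).
def estimated_rim_leak(leak_area_cm2: float, upstream_flow_cm3_s: float) -> float:
    """Estimate mask-rim leak flow (cm³/s) from leak area (cm²) and upstream airflow (cm³/s)."""
    if abs(upstream_flow_cm3_s) > 2000:
        raise ValueError("Relation was derived only for flows within ±2000 cm³/s")
    return 0.33 * leak_area_cm2 * upstream_flow_cm3_s

# Hypothetical case: a 0.1 cm² gap at the mask rim during a 200 cm³/s phonation task.
leak = estimated_rim_leak(0.1, 200.0)
print(f"Estimated leak: {leak:.1f} cm³/s ({100 * leak / 200.0:.1f}% of the measured flow)")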



from #Audiology via xlomafota13 on Inoreader http://ift.tt/2wfJV2i
via IFTTT
