Wednesday, September 19, 2018

Pairing New Words With Unfamiliar Objects: Comparing Children With and Without Cochlear Implants

Purpose
This study investigates differences between preschool children with cochlear implants and age-matched children with normal hearing during an initial stage in word learning to evaluate whether they (a) match novel words to unfamiliar objects and (b) solicit information about unfamiliar objects during play.
Method
Twelve preschool children with cochlear implants and 12 children with normal hearing matched for age completed 2 experimental tasks. In the 1st task, children were asked to point to a picture that matched either a known word or a novel word. In the 2nd task, children were presented with unfamiliar objects during play and were given the opportunity to ask questions about those objects.
Results
In Task 1, children with cochlear implants paired novel words with unfamiliar pictures in fewer trials than children with normal hearing. In Task 2, children with cochlear implants were less likely to solicit information about new objects than children with normal hearing. Performance on the 1st task, but not the 2nd, significantly correlated with expressive vocabulary standard scores of children with cochlear implants.
Conclusion
This study provides preliminary evidence that children with cochlear implants approach mapping novel words to and soliciting information about unfamiliar objects differently than children with normal hearing.

Reliability of Measures of N1 Peak Amplitude of the Compound Action Potential in Younger and Older Adults

Purpose
Human auditory nerve (AN) activity estimated from the amplitude of the first prominent negative peak (N1) of the compound action potential (CAP) is typically quantified using either a peak-to-peak measurement or a baseline-corrected measurement. However, the reliability of these 2 common measurement techniques has not been evaluated but is often assumed to be relatively poor, especially for older adults. To address this question, the current study (a) compared test–retest reliability of these 2 methods and (b) tested the extent to which measurement type affected the relationship between N1 amplitude and experimental factors related to the stimulus (higher and lower intensity levels) and participants (younger and older adults).
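To make the two quantification schemes concrete, the sketch below computes both from a single averaged waveform. This is a minimal illustration in Python, not the study's analysis code, and the window boundaries are placeholders rather than the values used by the authors.

```python
import numpy as np

def n1_amplitudes(waveform, t, n1_win=(1.0, 2.5), p1_win=(2.5, 4.0),
                  base_win=(-1.0, 0.0)):
    """Estimate CAP N1 amplitude two ways from one averaged waveform.

    waveform: averaged CAP (microvolts); t: time (ms) re: click onset.
    All window boundaries are illustrative placeholders.
    """
    def seg(win):
        return waveform[(t >= win[0]) & (t < win[1])]

    n1 = seg(n1_win).min()            # N1 is a negative-going peak
    p1 = seg(p1_win).max()            # following positive peak (P1)
    baseline = seg(base_win).mean()   # pre-stimulus baseline estimate
    return {"peak_to_peak": p1 - n1, "baseline_corrected": baseline - n1}
```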
Method
Click-evoked CAPs were recorded in 24 younger (aged 18–30 years) and 20 older (aged 55–85 years) adults with clinically normal audiograms up to 3000 Hz. N1 peak amplitudes were estimated from peak-to-peak measurements (from N1 to P1) and baseline-corrected measurements for 2 stimulus levels (80 and 110 dB pSPL). Baseline-corrected measurements were made with 4 baseline windows. Each stimulus level was presented twice, and test–retest reliability of these 2 measures was assessed using the intraclass correlation coefficient. Linear mixed models were used to evaluate the extent to which age group and click level uniquely predicted N1 amplitude and whether the predictive relationships differed between N1 measurement techniques.
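Test–retest reliability of the two presentations can be quantified with an intraclass correlation coefficient. Since the abstract does not state which ICC form was computed, the sketch below implements the common two-way random-effects, absolute-agreement, single-measure form, ICC(2,1), as one plausible choice.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    ratings: (n_subjects, k_sessions) array, e.g., N1 amplitudes from the
    two presentations of one stimulus level.
    """
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * np.sum((y.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((y.mean(axis=0) - grand) ** 2)   # sessions
    ss_err = np.sum((y - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```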
Results
Both peak-to-peak and baseline-corrected measurements of N1 amplitude were found to have good-to-excellent reliability, with intraclass correlation coefficient values > 0.60. As expected, N1 amplitudes were significantly larger for younger participants compared with older participants for both measurement types and were significantly larger in response to clicks presented at 110 dB pSPL than at 80 dB pSPL for both measurement types. Furthermore, the choice of baseline window had no significant effect on N1 amplitudes using the baseline-corrected method.
Conclusions
Our results suggest that measurements of AN activity can be robustly and reliably recorded in both younger and older adults using either peak-to-peak or baseline-corrected measurements of the N1 of the CAP. Peak-to-peak measurements yield larger N1 response amplitudes and are the default measurement type for many clinical systems, whereas baseline-corrected measurements are computationally simpler. Furthermore, the relationships between AN activity and stimulus- and participant-related variables were not affected by measurement technique, which suggests that these relationships can be compared across studies using different techniques for measuring the CAP N1.

Perceived Voice Quality and Voice-Related Problems Among Older Adults With Hearing Impairments

The auditory system helps regulate phonation. A speaker's perception of their own voice is likely to be of both emotional and functional significance. Although many investigations have observed deviant voice qualities in individuals who are prelingually deaf or profoundly hearing impaired, less is known about how older adults with acquired hearing impairments perceive their own voice and potential voice problems.
Purpose
The purpose of this study was to investigate problems relating to phonation and self-perceived voice sound quality in older adults based on hearing ability and the use of hearing aids.
Method
This was a cross-sectional study, with 290 participants divided into 3 groups (matched by age and gender): (a) individuals with hearing impairments who did not use hearing aids (n = 110), (b) individuals with hearing impairments who did use hearing aids (n = 110), and (c) individuals with no hearing impairments (n = 70). All participants underwent a pure-tone audiometry exam; completed standardized questionnaires regarding their hearing, voice, and general health; and were recorded speaking in a soundproof room.
Results
The hearing aid users surpassed the benchmarks for having a voice disorder on the Voice Handicap Index (VHI; Jacobson et al., 1997) at almost double the rate predicted by the Swedish normative values for their age range, although there was no significant difference in acoustical measures between any of the groups. Both groups with hearing impairments scored significantly higher on the VHI than the control group, indicating more impairment. It remains inconclusive how much hearing loss and hearing aids separately contribute to the difference in voice problems. The total scores on the Hearing Handicap Inventory for the Elderly (Ventry & Weinstein, 1982), in combination with the variables gender and age, explained 21.9% of the variance on the VHI. Perceiving one's own voice as distorted, dull, or hollow had a strong negative association with general satisfaction with the sound quality of one's own voice. In addition, groupwise differences in own-voice descriptions suggest that a negative perception of one's voice could be influenced by alterations caused by hearing aid processing.
Conclusions
The results indicate that hearing impairments and hearing aids affect several aspects of vocal satisfaction in older adults. A greater understanding of how hearing impairments and hearing aids relate to voice problems may contribute to better voice and hearing care.

Perceptual Encoding in Auditory Brainstem Responses: Effects of Stimulus Frequency

Purpose
A central question about auditory perception concerns how acoustic information is represented at different stages of processing. The auditory brainstem response (ABR) provides a potentially useful index of the earliest stages of this process. However, it is unclear how basic acoustic characteristics (e.g., differences in tones spanning a wide range of frequencies) are indexed by ABR components. This study addresses this question by investigating how ABR amplitude and latency track stimulus frequency for tones ranging from 250 to 8000 Hz.
Method
In a repeated-measures experimental design, listeners were presented with brief tones (250, 500, 1000, 2000, 4000, and 8000 Hz) in random order while electroencephalography was recorded. ABR latencies and amplitudes for Wave V (6–9 ms) and in the time window following the Wave V peak (labeled as Wave VI; 9–12 ms) were measured.
Results
Wave V latency decreased with increasing frequency, replicating previous work. In addition, Waves V and VI amplitudes tracked differences in tone frequency, with a nonlinear response from 250 to 8000 Hz and a clear log-linear response to tones from 500 to 8000 Hz.
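The log-linear pattern can be checked by regressing amplitude on log-transformed frequency; the sketch below does this with numpy on made-up amplitude values, purely to illustrate the analysis.

```python
import numpy as np

freqs = np.array([500, 1000, 2000, 4000, 8000])   # Hz
amps = np.array([0.42, 0.38, 0.33, 0.29, 0.24])   # microvolts; made-up values

# Fit amplitude as a linear function of log2(frequency), i.e., per octave.
slope, intercept = np.polyfit(np.log2(freqs), amps, 1)
print(f"amplitude change per octave: {slope:.3f} uV")
```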
Conclusions
Results demonstrate that the ABR provides a useful measure of early perceptual encoding for stimuli varying in frequency and that the tonotopic organization of the auditory system is preserved at this stage of processing for stimuli from 500 to 8000 Hz. Such a measure may serve as a useful clinical tool for evaluating a listener's ability to encode specific frequencies in sounds.
Supplemental Material
https://doi.org/10.23641/asha.6987422

Exploring the Effects of Imitating Hand Gestures and Head Nods on L1 and L2 Mandarin Tone Production

Purpose
This study investigated the impact of metaphoric actions (head nods and hand gestures) on the production of Mandarin tones by first language (L1) and second language (L2) speakers.
Method
In 2 experiments, participants imitated videos of Mandarin tones produced under 3 conditions: (a) speech alone, (b) speech + head nods, and (c) speech + hand gestures. Fundamental frequency was recorded for both L1 (Experiment 1) and L2 (Experiment 2a) speakers, and the output of the L2 speakers was rated for tonal accuracy by 7 native Mandarin judges (Experiment 2b).
Results
Experiment 1 showed that 12 L1 speakers' fundamental frequency spectral data did not differ among the 3 conditions. In Experiment 2a, the conditions did not affect the production of 24 English speakers for the most part, but there was some evidence that hand gestures helped Tone 4. In Experiment 2b, native Mandarin judges found limited differences across conditions in L2 productions, with Tone 3 showing a slight head-nod benefit in a subset of “correct” L2 tokens.
Conclusion
Results suggest that metaphoric bodily actions do not influence the lowest levels of L1 speech production in a tonal language and may play a very modest role during preliminary L2 learning.

The Effects of Syntactic Complexity and Sentence Length on the Speech Motor Control of School-Age Children Who Stutter

Purpose
Early childhood stuttering is associated with atypical speech motor development. Compared with children who do not stutter (CWNS), the speech motor systems of school-age children who stutter (CWS) may also be particularly susceptible to breakdown under increased processing demands. The effects of increased syntactic complexity and sentence length on articulatory coordination were investigated.
Method
Kinematic, temporal, and behavioral indices of articulatory coordination were quantified for school-age CWS (n = 19) and CWNS (n = 18). Participants produced 4 sentences varying in syntactic complexity (simple declarative/complex declarative with a relative clause) and sentence length (short/long). Lip aperture variability (LAVar) served as a kinematic measure of interarticulatory consistency over repeated productions. Articulation rate (syllables per second) was also calculated as a related temporal measure. Finally, we computed accuracy and stuttering frequency percentages for each sentence to assess task performance.
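One common formulation of a lip aperture variability index time- and amplitude-normalizes each repeated trajectory and then sums the standard deviations across normalized time points. The sketch below follows that formulation; the resampling grid and z-score normalization are illustrative choices, not necessarily the study's exact pipeline.

```python
import numpy as np

def lavar(trials, n_points=50):
    """Lip aperture variability across repeated productions (a sketch).

    trials: list of 1-D lip-aperture trajectories, one per repetition.
    """
    norm = []
    for rec in trials:
        rec = np.asarray(rec, dtype=float)
        # Time normalization: resample every record to a common length.
        x_old = np.linspace(0.0, 1.0, rec.size)
        x_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(x_new, x_old, rec)
        # Amplitude normalization removes overall scale differences.
        norm.append((resampled - resampled.mean()) / resampled.std())
    # Sum of point-by-point SDs: higher values = less consistent coordination.
    return np.sum(np.std(np.vstack(norm), axis=0))
```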
Results
Increased sentence length, but not syntactic complexity, increased LAVar in both groups. This effect was disproportionately greater for CWS compared with CWNS. No group differences were observed for articulation rate. CWS were also less accurate in their sentence productions than their fluent peers and exhibited more instances of stuttering when processing demands associated with length and syntactic complexity increased.
Conclusions
The speech motor systems of school-age CWS appear to be particularly vulnerable to processing demands associated with increased sentence length, as evidenced by increased LAVar. Increasing the length and complexity of the sentence stimuli also resulted in reduced production accuracy and increased stuttering frequency. We discuss these findings within a motor control framework of speech production.

A Simple Method to Obtain Basic Acoustic Measures From Video Recordings as Subtitles

Purpose
Sound pressure level (SPL) and fundamental frequency (fo) are basic and important measures in the acoustical assessment of voice quality, and their variation also influences vocal fold vibration characteristics. Most sophisticated laryngeal videostroboscopic systems therefore also measure and display the SPL and fo values directly over the video frames by means of a rather expensive special hardware setup. An alternative, simple software-based method is presented here to obtain these measures as video subtitles.
Method
The software extracts acoustic data from the video recording, calculates the SPL and fo parameters, and saves their values in a separate subtitle file. To ensure the correct SPL values, the microphone signal is calibrated beforehand with a sound level meter.
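A minimal sketch of such a pipeline is shown below in Python, assuming the audio track has already been demuxed from the video (here to a hypothetical exam.wav) and that the soundfile package is available. The calibration offset, frame length, and thresholds are placeholders to be replaced by values from an actual calibration session.

```python
import numpy as np
import soundfile as sf  # assumed dependency for reading the extracted audio

CAL_OFFSET_DB = 120.0   # hypothetical: set by comparing a recorded calibration
                        # tone against a sound level meter, as described above

def fo_autocorr(frame, fs, fmin=60.0, fmax=500.0):
    """Rough fo estimate (Hz) via autocorrelation; None if judged unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    if hi >= ac.size or ac[0] <= 0:
        return None
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag if ac[lag] > 0.3 * ac[0] else None  # crude voicing check

def srt_time(t):
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{int(s):02d},{int((s % 1) * 1000):03d}"

audio, fs = sf.read("exam.wav")  # mono audio demuxed from the video beforehand
hop = int(0.5 * fs)              # one subtitle entry every 0.5 s
with open("exam.srt", "w") as srt:
    for i, start in enumerate(range(0, len(audio) - hop, hop)):
        frame = audio[start:start + hop]
        # SPL from the calibrated RMS level of the analysis frame.
        spl = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12) + CAL_OFFSET_DB
        fo = fo_autocorr(frame[:int(0.04 * fs)], fs)  # 40-ms fo analysis
        text = (f"SPL {spl:5.1f} dB   fo {fo:6.1f} Hz" if fo
                else f"SPL {spl:5.1f} dB")
        srt.write(f"{i + 1}\n{srt_time(start / fs)} --> "
                  f"{srt_time((start + hop) / fs)}\n{text}\n\n")
```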
Results
The new approach was tested on videokymographic recordings obtained laryngoscopically. The SPL and fo values calculated from a videokymographic recording, the creation of the subtitles, and their display are presented.
Conclusions
This method is useful for integrating the acoustic measures with any kind of video recording containing audio data when built-in hardware means are not available. However, the calibration and other technical aspects of data acquisition and synchronization described in this article must be properly addressed during recording.

The Impact of Exposure With No Training: Implications for Future Partner Training Research

Purpose
This research note reports on an unexpected negative finding related to behavior change in a controlled trial designed to test whether partner training improves the conversational skills of volunteers.
Method
The clinical trial involving training in “Supported Conversation for Adults with Aphasia” utilized a single-blind, randomized, controlled, pre–post design. Eighty participants making up 40 dyads of a volunteer conversation partner and an adult with aphasia were randomly allocated to either an experimental or control group of 20 dyads each. Descriptive statistics including exact 95% confidence intervals were calculated for the percentage of control group participants who got worse after exposure to individuals with aphasia.
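Exact binomial confidence intervals of this kind are usually Clopper-Pearson intervals, which can be computed from the beta distribution. The sketch below uses scipy with illustrative counts, not the study's actual data.

```python
from scipy.stats import beta

def exact_ci(k, n, level=0.95):
    """Clopper-Pearson exact confidence interval for k successes in n trials."""
    a = 1.0 - level
    lower = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# E.g., roughly one third of 20 control participants getting worse
# (illustrative numbers only):
print(exact_ci(6, 20))
```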
Results
Positive outcomes of training in Supported Conversation for Adults with Aphasia for both the trained volunteers and their partners with aphasia were reported by Kagan, Black, Felson Duchan, Simmons-Mackie, and Square in 2001. However, post hoc data analysis revealed that almost one third of untrained control participants had a negative outcome rather than the anticipated neutral or slightly positive outcome.
Conclusions
If the results of this small study are in any way representative of what happens in real life, communication partner training in aphasia becomes even more important than indicated from the positive results of training studies. That is, it is possible that mere exposure to a communication disability such as aphasia could have negative impacts on communication and social interaction. This may be akin to what is known as a “nocebo” effect—something for partner training studies in aphasia to take into account.

A Phonetic Complexity-Based Approach for Intelligibility and Articulatory Precision Testing: A Preliminary Study on Talkers With Amyotrophic Lateral Sclerosis

Purpose
This study describes a phonetic complexity-based approach for speech intelligibility and articulatory precision testing using preliminary data from talkers with amyotrophic lateral sclerosis.
Method
Eight talkers with amyotrophic lateral sclerosis and 8 healthy controls produced a list of 16 low- and high-complexity words. Sixty-four listeners judged the samples for intelligibility, and 2 trained listeners completed phoneme-level analysis to determine articulatory precision. To estimate percent intelligibility, listeners orthographically transcribed each word, and the transcriptions were scored as either accurate or inaccurate. Percent articulatory precision was calculated based on the trained listeners' judgments of phoneme distortions, deletions, additions, and/or substitutions for each word. Articulation errors were weighted based on their perceived impact on intelligibility to determine word-level precision.
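The weighting step might look like the following sketch, in which both the error weights and the helper function are hypothetical illustrations of the idea rather than the authors' actual scheme.

```python
# Hypothetical weights: larger values for errors judged to harm
# intelligibility more. The paper's actual weights may differ.
ERROR_WEIGHTS = {"distortion": 0.5, "substitution": 1.0,
                 "deletion": 1.0, "addition": 0.5}

def word_precision(n_phonemes, errors):
    """Percent articulatory precision for one word from judged phoneme errors.

    errors: list of error-type strings, one entry per affected phoneme.
    """
    penalty = sum(ERROR_WEIGHTS[e] for e in errors)
    return max(0.0, 100.0 * (n_phonemes - penalty) / n_phonemes)

print(word_precision(5, ["distortion", "deletion"]))  # -> 70.0
```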
Results
Between-groups differences in word intelligibility and articulatory precision were significant at lower levels of phonetic complexity as dysarthria severity increased. Specifically, more severely impaired talkers showed significant reductions in word intelligibility and precision at both complexity levels, whereas those with milder speech impairments displayed intelligibility reductions only for more complex words. Articulatory precision was less sensitive to mild dysarthria compared to speech intelligibility for the proposed complexity-based approach.
Conclusions
Considering phonetic complexity for dysarthria tests could result in more sensitive assessments for detecting and monitoring dysarthria progression.

Diagnosing Middle Ear Pathology in 6- to 9-Month-Old Infants Using Wideband Absorbance: A Risk Prediction Model

Purpose
The aim of this study was to develop a risk prediction model for detecting middle ear pathology in 6- to 9-month-old infants using wideband absorbance measures.
Method
Two hundred forty-nine infants aged 23–39 weeks (Mdn = 28 weeks) participated in the study. Distortion product otoacoustic emissions and high-frequency tympanometry were tested in both ears of each infant to assess middle ear function. Wideband absorbance was measured at ambient pressure in each participant from 226 to 8000 Hz. Absorbance results from 1 ear of each infant were used to predict middle ear dysfunction, using logistic regression. To develop a model likely to generalize to new infants, the number of variables was reduced using principal component analysis, and a penalty was applied when fitting the model. The model was validated using the opposite ears and with bootstrap resampling. Model performance was evaluated through measures of discrimination and calibration. Discrimination was assessed with the area under the receiver operating characteristic curve (AUC); and calibration, with calibration curves, which plotted actual against predicted probabilities.
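The modeling recipe (dimension reduction, a penalized logistic fit, and AUC-based validation on the opposite ears) can be sketched with scikit-learn as below. The data are random stand-ins, and the number of components, penalty form, and toolchain are assumptions rather than details reported in the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_fit = rng.random((249, 16))        # stand-in absorbance spectra, fit ears
y_fit = rng.integers(0, 2, 249)      # stand-in reference-standard outcomes
X_val = rng.random((249, 16))        # opposite ears
y_val = rng.integers(0, 2, 249)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),                      # reduce predictors before fitting
    LogisticRegression(penalty="l2", C=1.0),  # penalty guards against overfitting
)
model.fit(X_fit, y_fit)
print("fit-ear AUC:", roc_auc_score(y_fit, model.predict_proba(X_fit)[:, 1]))
print("opposite-ear AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```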
Results
AUC of the fitted model was 0.887. The model validated adequately when applied to the opposite ears (AUC = 0.852) and with bootstrap resampling (AUC = 0.874). Calibration was satisfactory, with high agreement between predictions and observed results.
Conclusions
The risk prediction model had accurate discrimination and satisfactory calibration. Validation results indicate that it may generalize well to new infants. The model could potentially be used in diagnostic and screening settings. In the context of screening, probabilities provide an intuitive and flexible mechanism for setting the referral threshold that is sensitive to the costs associated with true and false-positive outcomes. In a diagnostic setting, predictions could be used to supplement visual inspection of absorbance for individualized diagnoses. Further research assessing the performance and impact of the model in these contexts is warranted.

Pure-Tone Frequency Discrimination in Preschoolers, Young School-Age Children, and Adults

Purpose
Published data indicate nearly adultlike frequency discrimination in infants but large child–adult differences for school-age children. This study evaluated the role that differences in measurement procedures and stimuli may have played in the apparent nonmonotonicity. Frequency discrimination was assessed in preschoolers, young school-age children, and adults using stimuli and procedures that have previously been used to test infants.
Method
Listeners were preschoolers (3–4 years), young school-age children (5–6 years), and adults (19–38 years). Performance was assessed using a single-interval, observer-based method and a continuous train of stimuli, similar to that previously used to evaluate infants. Testing was completed using 500- and 5000-Hz standard tones, fixed within a set of trials. Thresholds for frequency discrimination were obtained using an adaptive, two-down one-up procedure. Adults and most school-age children responded by raising their hands. An observer-based, conditioned-play response was used to test preschoolers and those school-age children for whom the hand-raise procedure was not effective for conditioning.
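A two-down one-up rule converges on the 70.7% correct point of the psychometric function (Levitt, 1971). A minimal staircase sketch, with illustrative step size and stopping rule, follows; the simulated listener at the end is only there to make the example runnable.

```python
import random

def staircase(respond, start_delta, step=0.5, n_reversals=8):
    """Two-down one-up adaptive track; respond(delta) -> True if correct."""
    delta, correct_run, direction = start_delta, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(delta):
            correct_run += 1
            if correct_run == 2:              # two correct in a row -> harder
                correct_run = 0
                if direction == +1:
                    reversals.append(delta)   # easier-to-harder turnaround
                direction = -1
                delta = max(delta - step, 0.01)
        else:                                 # one error -> easier
            correct_run = 0
            if direction == -1:
                reversals.append(delta)       # harder-to-easier turnaround
            direction = +1
            delta += step
    last = reversals[-6:]                     # average the final reversals
    return sum(last) / len(last)

# Simulated listener: reliably correct above a "true" threshold, guessing below.
print(staircase(lambda d: d > 1.2 or random.random() < 0.5, start_delta=4.0))
```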
Results
Results suggest an effect of age and frequency on thresholds but no interaction between these 2 factors. A lower proportion of preschoolers completed training compared with young school-age children. For those children who completed training, however, thresholds did not improve significantly with age; both groups of children performed more poorly than adults. Performance was better for the 500-Hz standard frequency compared with the 5000-Hz standard frequency.
Conclusions
Thresholds for school-age children were broadly similar to those previously observed using a forced-choice procedure. Although there was a trend for improved performance with increasing age, no significant age effect was observed between preschoolers and school-age children. The practice of excluding participants based on failure to meet conditioning criteria in an observer-based task could contribute to the relatively good performance observed for preschoolers in this study and the adultlike performance previously observed in infants.

Morphosyntax Production of Preschool Children With Hearing Loss: An Evaluation of the Extended Optional Infinitive and Surface Accounts

Purpose
The first aim of this study was to explore differences in profiles of morphosyntax production of preschool children with hearing loss (CHL) relative to age- and language-matched comparison groups. The second aim was to explore the potential of extending 2 long-standing theoretical accounts of morphosyntax weakness in children with specific language impairment to preschool CHL.
Method
This study examined conversational language samples to describe the accuracy and type of inaccurate productions of Brown's grammatical morphemes in 18 preschool CHL as compared with an age-matched group (±3 months, n = 18) and a language-matched group (±1 raw score point on an expressive language subtest, n = 18). Age ranged from 45 to 62 months. Performance across groups was compared. In addition, production accuracy of CHL on morphemes that varied by tense and duration was compared to assess the validity of extending theoretical accounts of children with specific language impairment to CHL.
Results
CHL exhibited particular difficulty with morphosyntax relative to other aspects of language. In addition, differences across groups on accuracy and type of inaccurate productions were observed. Finally, a unified approach to explaining morphosyntax weakness in CHL was more appropriate than a linguistic- or perceptual-only approach.
Conclusions
Taken together, the findings of this study support a unified theoretical account of morphosyntax weakness in CHL in which both tense and duration of morphemes play a role in morphosyntax production accuracy, with a more robust role for tense than duration.

Verbal Agreement Inflection in German Children With Down Syndrome

Purpose
The study aims to explore whether finite verbal morphology is affected in children/adolescents with Down syndrome (DS), whether observed deficits in this domain are indicative of a delayed or deviant development, and whether they are due to phonetic/phonological problems or deficits in phonological short-term memory.
Method
An elicitation task on subject–verb agreement, a picture-naming task targeting stem-final consonants that also express verbal agreement, a nonword repetition task, and a test on grammar comprehension were conducted with 2 groups of monolingual German children: 32 children/adolescents with DS (chronological age M = 11;01 [years;months]) and a group of 16 typically developing children (chronological age M = 4;00) matched on nonverbal mental age.
Results
Analyses reveal that a substantial number of children/adolescents with DS are impaired in marking verbal agreement and fail to reach an acquisition criterion. The production of word-final consonants succeeds, however, when these consonants do not express verbal agreement. Performance on verbal agreement and on nonword repetition are related.
Conclusions
Data indicate that a substantial number of children/adolescents with DS display a deficit in verbal agreement inflection that cannot be attributed to phonetic/phonological problems. The influence of phonological short-term memory on the acquisition of subject–verb agreement has to be further explored.

Quantitative Analysis of Agrammatism in Agrammatic Primary Progressive Aphasia and Dominant Apraxia of Speech

Purpose
The aims of the study were to assess and compare grammatical deficits in written and spoken language production in subjects with agrammatic primary progressive aphasia (agPPA) and in subjects with agrammatism in the context of dominant apraxia of speech (DAOS) and to investigate neuroanatomical correlates.
Method
Eight agPPA and 21 DAOS subjects performed the picture description task of the Western Aphasia Battery (WAB) both in writing and orally. Responses were transcribed and coded for linguistic analysis. The agPPA and DAOS groups were compared to 13 subjects with primary progressive apraxia of speech (PPAOS) who did not have agrammatism. Spearman correlations were performed between the written and spoken variables. Patterns of atrophy in each group were compared, and relationships between the different linguistic measures and the integrity of Broca's area were assessed.
Results
agPPA and DAOS both showed lower mean length of utterance, fewer grammatical utterances, more nonutterances, more syntactic and semantic errors, and fewer complex utterances than PPAOS in writing and speech, as well as fewer correct verbs and nouns in speech. Only verb ratio and proportion of grammatical utterances correlated between modalities. agPPA and DAOS both showed greater involvement of Broca's area than PPAOS, and atrophy of Broca's area correlated with proportion of grammatical and ungrammatical utterances and semantic errors in writing and speech.
Conclusions
agPPA and DAOS subjects showed similar patterns of agrammatism, although subjects performed differently when speaking versus writing. Integrity of Broca's area correlates with agrammatism.

Changes in the Synchrony of Multimodal Communication in Early Language Development

Purpose
The aim of this study is to analyze the changes in temporal synchrony between gesture and speech of multimodal communicative behaviors in the transition from babbling to two-word productions.
Method
Ten Spanish-speaking children were observed at 9, 12, 15, and 18 months of age in a semistructured play situation. We longitudinally analyzed the synchrony between gestures and vocal productions and between their prominent parts. We also explored the relationship between gestural–vocal synchrony and independent measures of language development.
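Overlap between a gesture and a vocalization can be quantified as the overlapped fraction of the gesture's duration; the helper below is a sketch of one plausible way to compute the synchrony measures described above.

```python
def overlap_proportion(gesture, vocalization):
    """Proportion of the gesture's duration overlapped by the vocalization.

    Both arguments are (onset, offset) tuples in seconds.
    """
    g_on, g_off = gesture
    v_on, v_off = vocalization
    overlap = max(0.0, min(g_off, v_off) - max(g_on, v_on))
    return overlap / (g_off - g_on)

print(overlap_proportion((1.0, 2.0), (1.4, 2.6)))  # -> 0.6
```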
Results
Results showed that multimodal communicative behaviors tend to become shorter with age, with increasing overlap of their constituent elements. The same pattern is found when considering the synchrony between the prominent parts. The proportion of overlap between gestural and vocal elements at 15 months of age, as well as the proportion of the stroke overlapped with vocalization, appears to be related to lexical development 3 months later.
Conclusions
These results suggest that children produce gestures and vocalizations as coordinated elements of a single communication system before the transition to the two-word stage. This coordination is related to subsequent lexical development in this period.
Supplemental Material
https://doi.org/10.23641/asha.6912242

Code-Switching in Highly Proficient Spanish/English Bilingual Adults: Impact on Masked Word Recognition

Purpose
The purpose of this study was to evaluate the impact of code-switching on Spanish/English bilingual listeners' speech recognition of English and Spanish words in the presence of competing speech-shaped noise.
Method
Participants were Spanish/English bilingual adults (N = 27) who were highly proficient in both languages. Target stimuli were English and Spanish words presented in speech-shaped noise at a −14-dB signal-to-noise ratio. There were 4 target conditions: (a) English only, (b) Spanish only, (c) mixed English, and (d) mixed Spanish. In the mixed-English condition, 75% of the words were in English, whereas 25% of the words were in Spanish. The percentages were reversed in the mixed-Spanish condition.
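Setting a −14-dB signal-to-noise ratio amounts to scaling the noise relative to the RMS level of the target word, as in the sketch below. RMS-based scaling is one standard convention; the authors' exact level-setting procedure is not specified here.

```python
import numpy as np

def mix_at_snr(target, noise, snr_db=-14.0):
    """Add noise to a target word at the requested SNR (RMS-based)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    noise = noise[: len(target)]
    # Gain such that 20*log10(rms(target) / (gain * rms(noise))) == snr_db.
    gain = rms(target) / (rms(noise) * 10 ** (snr_db / 20.0))
    return target + gain * noise
```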
Results
Accuracy was poorer for the majority (75%) and minority (25%) languages in both mixed-language conditions compared with the corresponding single-language conditions. Results of a follow-up experiment suggest that this finding cannot be explained in terms of an increase in the number of possible response alternatives for each picture in the mixed-language condition relative to the single-language condition.
Conclusions
Results suggest a cost of language mixing on speech perception when bilingual listeners alternate between languages in noisy environments. In addition, the cost of code-switching on speech recognition in noise was similar for both languages in this group of highly proficient Spanish/English bilingual speakers. Differences in response-set size could not account for the poorer results in the mixed-language conditions.

Using the Language ENvironment Analysis (LENA) System to Investigate Cultural Differences in Conversational Turn Count

Purpose
This study investigates how the variables of culture and hearing status might influence the amount of parent–child talk families engage in throughout an average day.
Method
Seventeen Vietnamese and 8 Canadian families of children with hearing loss and 17 Vietnamese and 13 Canadian families of typically hearing children between the ages of 18 and 48 months participated in this cross-comparison design study. Each child wore a Language ENvironment Analysis system digital language processor for 3 days. An automated vocal analysis then calculated an average conversational turn count (CTC) for each participant as the variable of investigation. The CTCs for the 4 groups were compared using a Kruskal–Wallis test and a set of planned pairwise comparisons.
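The group comparison can be sketched with scipy, whose kruskal function implements the Kruskal-Wallis H test; the CTC values below are placeholders, not the study's data, and the pairwise follow-up shown is just one example comparison.

```python
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "Vietnamese, hearing loss": [310, 295, 402, 350],   # placeholder CTCs
    "Canadian, hearing loss": [512, 488, 450, 530],
    "Vietnamese, typical hearing": [330, 301, 390, 360],
    "Canadian, typical hearing": [540, 495, 470, 505],
}
h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# One planned pairwise comparison (culture within the hearing loss groups):
print(mannwhitneyu(groups["Canadian, hearing loss"],
                   groups["Vietnamese, hearing loss"]))
```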
Results
The Canadian families participated in significantly more conversational turns than the Vietnamese families. No significant difference was found within either the Vietnamese or the Canadian cohort as a function of hearing status.
Conclusions
Culture, but not hearing status, influences CTCs as derived by the Language ENvironment Analysis system. Clinicians should consider how cultural communication practices might influence their suggestions for language stimulation.

How Do Age and Hearing Loss Impact Spectral Envelope Perception?

Purpose
The goal was to evaluate the potential effects of increasing hearing loss and advancing age on spectral envelope perception.
Method
Spectral modulation detection was measured as a function of spectral modulation frequency from 0.5 to 8.0 cycles/octave. The spectral modulation task involved discrimination of a noise carrier (3 octaves wide, from 400 to 3200 Hz) with a flat spectral envelope from a noise having a sinusoidal spectral envelope across a logarithmic audio frequency scale. Spectral modulation transfer functions (SMTFs; modulation threshold vs. modulation frequency) were computed and compared across 4 listener groups: young normal hearing, older normal hearing, older with mild hearing loss, and older with moderate hearing loss. Estimates of internal spectral contrast were obtained by computing excitation patterns.
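A rippled-spectrum stimulus of this kind can be synthesized in the frequency domain by imposing a sinusoidal envelope on a log-frequency axis; the sketch below shows the idea, with an illustrative modulation depth and duration that are not taken from the study.

```python
import numpy as np

def spectral_ripple_noise(fs=44100, dur=0.5, f_lo=400.0, f_hi=3200.0,
                          mod_freq=2.0, depth_db=10.0, phase=0.0, seed=1):
    """Noise carrier with a sinusoidal spectral envelope (cycles/octave)."""
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)       # log-frequency position
    # Half-depth sinusoid so peak-to-valley contrast equals depth_db.
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * mod_freq * octaves + phase)
    phases = np.random.default_rng(seed).uniform(0, 2 * np.pi, band.sum())
    spec[band] = 10 ** (env_db / 20.0) * np.exp(1j * phases)
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))                # peak-normalize

flat = spectral_ripple_noise(depth_db=0.0)      # flat-envelope carrier
rippled = spectral_ripple_noise(depth_db=10.0)  # 2 cycles/octave ripple
```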
Results
SMTFs for young listeners with normal hearing were bandpass, with a minimum modulation detection threshold at 2 cycles/octave, and SMTFs for older listeners with normal hearing were remarkably similar to those of the young listeners. SMTFs for older listeners with mild and moderate hearing loss had a low-pass rather than a bandpass shape. Excitation patterns revealed that limited spectral resolution dictated modulation detection thresholds at high but not low spectral modulation frequencies. Even when factoring out (presumed) differences in frequency resolution among groups, spectral envelope perception was worse for the group with moderate hearing loss than for the other 3 groups.
Conclusions
Spectral envelope perception, as measured by spectral modulation detection thresholds, is compromised by hearing loss at higher spectral modulation frequencies, consistent with predictions based on the reduced spectral resolution known to accompany sensorineural hearing loss. Spectral envelope perception is not negatively affected by advancing age at any spectral modulation frequency between 0.5 and 8.0 cycles/octave.

from #Audiology via ola Kala on Inoreader https://ift.tt/2osQTuc
via IFTTT

Pairing New Words With Unfamiliar Objects: Comparing Children With and Without Cochlear Implants

Purpose
This study investigates differences between preschool children with cochlear implants and age-matched children with normal hearing during an initial stage in word learning to evaluate whether they (a) match novel words to unfamiliar objects and (b) solicit information about unfamiliar objects during play.
Method
Twelve preschool children with cochlear implants and 12 children with normal hearing matched for age completed 2 experimental tasks. In the 1st task, children were asked to point to a picture that matched either a known word or a novel word. In the 2nd task, children were presented with unfamiliar objects during play and were given the opportunity to ask questions about those objects.
Results
In Task 1, children with cochlear implants paired novel words with unfamiliar pictures in fewer trials than children with normal hearing. In Task 2, children with cochlear implants were less likely to solicit information about new objects than children with normal hearing. Performance on the 1st task, but not the 2nd, significantly correlated with expressive vocabulary standard scores of children with cochlear implants.
Conclusion
This study provides preliminary evidence that children with cochlear implants approach mapping novel words to and soliciting information about unfamiliar objects differently than children with normal hearing.

from #Audiology via ola Kala on Inoreader https://ift.tt/2ND1pu0
via IFTTT

Reliability of Measures of N1 Peak Amplitude of the Compound Action Potential in Younger and Older Adults

Purpose
Human auditory nerve (AN) activity estimated from the amplitude of the first prominent negative peak (N1) of the compound action potential (CAP) is typically quantified using either a peak-to-peak measurement or a baseline-corrected measurement. However, the reliability of these 2 common measurement techniques has not been evaluated but is often assumed to be relatively poor, especially for older adults. To address this question, the current study (a) compared test–retest reliability of these 2 methods and (b) tested the extent to which measurement type affected the relationship between N1 amplitude and experimental factors related to the stimulus (higher and lower intensity levels) and participants (younger and older adults).
Method
Click-evoked CAPs were recorded in 24 younger (aged 18–30 years) and 20 older (aged 55–85 years) adults with clinically normal audiograms up to 3000 Hz. N1 peak amplitudes were estimated from peak-to-peak measurements (from N1 to P1) and baseline-corrected measurements for 2 stimulus levels (80 and 110 dB pSPL). Baseline-corrected measurements were made with 4 baseline windows. Each stimulus level was presented twice, and test–retest reliability of these 2 measures was assessed using the intraclass correlation coefficient. Linear mixed models were used to evaluate the extent to which age group and click level uniquely predicted N1 amplitude and whether the predictive relationships differed between N1 measurement techniques.
Results
Both peak-to-peak and baseline-corrected measurements of N1 amplitude were found to have good-to-excellent reliability, with intraclass correlation coefficient values > 0.60. As expected, N1 amplitudes were significantly larger for younger participants compared with older participants for both measurement types and were significantly larger in response to clicks presented at 110 dB pSPL than at 80 dB pSPL for both measurement types. Furthermore, the choice of baseline window had no significant effect on N1 amplitudes using the baseline-corrected method.
Conclusions
Our results suggest that measurements of AN activity can be robustly and reliably recorded in both younger and older adults using either peak-to-peak or baseline-corrected measurements of the N1 of the CAP. Peak-to-peak measurements yield larger N1 response amplitudes and are the default measurement type for many clinical systems, whereas baseline-corrected measurements are computationally simpler. Furthermore, the relationships between AN activity and stimulus- and participant-related variables were not affected by measurement technique, which suggests that these relationships can be compared across studies using different techniques for measuring the CAP N1.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NGIP7K
via IFTTT

Perceived Voice Quality and Voice-Related Problems Among Older Adults With Hearing Impairments

The auditory system helps regulate phonation. A speaker's perception of their own voice is likely to be of both emotional and functional significance. Although many investigations have observed deviating voice qualities in individuals who are prelingually deaf or profoundly hearing impaired, less is known regarding how older adults with acquired hearing impairments perceive their own voice and potential voice problems.
Purpose
The purpose of this study was to investigate problems relating to phonation and self-perceived voice sound quality in older adults based on hearing ability and the use of hearing aids.
Method
This was a cross-sectional study, with 290 participants divided into 3 groups (matched by age and gender): (a) individuals with hearing impairments who did not use hearing aids (n = 110), (b) individuals with hearing impairments who did use hearing aids (n = 110), and (c) individuals with no hearing impairments (n = 70). All participants underwent a pure-tone audiometry exam; completed standardized questionnaires regarding their hearing, voice, and general health; and were recorded speaking in a soundproof room.
Results
The hearing aid users surpassed the benchmarks for having a voice disorder on the Voice Handicap Index (VHI; Jacobson et al., 1997) at almost double the rate predicted by the Swedish normative values for their age range, although there was no significant difference in acoustical measures between any of the groups. Both groups with hearing impairments scored significantly higher on the VHI than the control group, indicating more impairment. It remains inconclusive how much hearing loss versus hearing aids separately contribute to the difference in voice problems. The total scores on the Hearing Handicap Inventory for the Elderly (Ventry & Weinstein, 1982), in combination with the variables gender and age, explained 21.9% of the variance on the VHI. Perceiving one's own voice as being distorted, dull, or hollow had a strong negative association with a general satisfaction about the sound quality of one's own voice. In addition, groupwise differences in own-voice descriptions suggest that a negative perception of one's voice could be influenced by alterations caused by hearing aid processing.
Conclusions
The results indicate that hearing impairments and hearing aids affect several aspects of vocal satisfaction in older adults. A greater understanding of how hearing impairments and hearing aids relate to voice problems may contribute to better voice and hearing care.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NrMGlu
via IFTTT

Perceptual Encoding in Auditory Brainstem Responses: Effects of Stimulus Frequency

Purpose
A central question about auditory perception concerns how acoustic information is represented at different stages of processing. The auditory brainstem response (ABR) provides a potentially useful index of the earliest stages of this process. However, it is unclear how basic acoustic characteristics (e.g., differences in tones spanning a wide range of frequencies) are indexed by ABR components. This study addresses this by investigating how ABR amplitude and latency track stimulus frequency for tones ranging from 250 to 8000 Hz.
Method
In a repeated-measures experimental design, listeners were presented with brief tones (250, 500, 1000, 2000, 4000, and 8000 Hz) in random order while electroencephalography was recorded. ABR latencies and amplitudes for Wave V (6–9 ms) and in the time window following the Wave V peak (labeled as Wave VI; 9–12 ms) were measured.
Results
Wave V latency decreased with increasing frequency, replicating previous work. In addition, Waves V and VI amplitudes tracked differences in tone frequency, with a nonlinear response from 250 to 8000 Hz and a clear log-linear response to tones from 500 to 8000 Hz.
Conclusions
Results demonstrate that the ABR provides a useful measure of early perceptual encoding for stimuli varying in frequency and that the tonotopic organization of the auditory system is preserved at this stage of processing for stimuli from 500 to 8000 Hz. Such a measure may serve as a useful clinical tool for evaluating a listener's ability to encode specific frequencies in sounds.
Supplemental Material
https://doi.org/10.23641/asha.6987422

from #Audiology via ola Kala on Inoreader https://ift.tt/2wL00Ly
via IFTTT

Exploring the Effects of Imitating Hand Gestures and Head Nods on L1 and L2 Mandarin Tone Production

Purpose
This study investigated the impact of metaphoric actions—head nods and hand gestures—in producing Mandarin tones for first language (L1) and second language (L2) speakers.
Method
In 2 experiments, participants imitated videos of Mandarin tones produced under 3 conditions: (a) speech alone, (b) speech + head nods, and (c) speech + hand gestures. Fundamental frequency was recorded for both L1 (Experiment 1) and L2 (Experiment 2a) speakers, and the output of the L2 speakers was rated for tonal accuracy by 7 native Mandarin judges (Experiment 2b).
Results
Experiment 1 showed that 12 L1 speakers' fundamental frequency spectral data did not differ among the 3 conditions. In Experiment 2a, the conditions did not affect the production of 24 English speakers for the most part, but there was some evidence that hand gestures helped Tone 4. In Experiment 2b, native Mandarin judges found limited conditional differences in L2 productions, with Tone 3 showing a slight head nods benefit in a subset of “correct” L2 tokens.
Conclusion
Results suggest that metaphoric bodily actions do not influence the lowest levels of L1 speech production in a tonal language and may play a very modest role during preliminary L2 learning.

from #Audiology via ola Kala on Inoreader https://ift.tt/2LWYoTO
via IFTTT

The Effects of Syntactic Complexity and Sentence Length on the Speech Motor Control of School-Age Children Who Stutter

Purpose
Early childhood stuttering is associated with atypical speech motor development. Compared with children who do not stutter (CWNS), the speech motor systems of school-age children who stutter (CWS) may also be particularly susceptible to breakdown under increased processing demands. The effects of increased syntactic complexity and sentence length on articulatory coordination were investigated.
Method
Kinematic, temporal, and behavioral indices of articulatory coordination were quantified for school-age CWS (n = 19) and CWNS (n = 18). Participants produced 4 sentences varying in syntactic complexity (simple declarative/complex declarative with a relative clause) and sentence length (short/long). Lip aperture variability (LAVar) served as a kinematic measure of interarticulatory consistency over repeated productions. Articulation rate (syllables per second) was also calculated as a related temporal measure. Finally, we computed accuracy and stuttering frequency percentages for each sentence to assess task performance.
Results
Increased sentence length, but not syntactic complexity, increased LAVar in both groups. This effect was disproportionately greater for CWS compared with CWNS. No group differences were observed for articulation rate. CWS were also less accurate in their sentence productions than fluent peers and exhibited more instances of stuttering when processing demands associated with length and syntactic complexity increases.
Conclusions
The speech motor systems of school-age CWS appear to be particularly vulnerable to processing demands associated with increased sentence length, as evidenced by increased LAVar. Increasing the length and complexity of the sentence stimuli also resulted in reduced production accuracy and increased stuttering frequency. We discuss these findings within a motor control framework of speech production.

from #Audiology via ola Kala on Inoreader https://ift.tt/2L4rXSS
via IFTTT

A Simple Method to Obtain Basic Acoustic Measures From Video Recordings as Subtitles

Purpose
Sound pressure level (SPL) and fundamental frequency (fo) are very basic and important measures in the acoustical assessment of voice quality, and their variation influences also the vocal fold vibration characteristics. Most sophisticated laryngeal videostroboscopic systems therefore also measure and display the SPL and fo values directly over the video frames by means of a rather expensive special hardware setup. An alternative simple software-based method is presented here to obtain these measures as video subtitles.
Method
The software extracts acoustic data from the video recording, calculates the SPL and fo parameters, and saves their values in a separate subtitle file. To ensure the correct SPL values, the microphone signal is calibrated beforehand with a sound level meter.
Results
The new approach was tested on videokymographic recordings obtained laryngoscopically. The results of SPL and fo values calculated from the videokymographic recording, subtitles creation, and their display are presented.
Conclusions
This method is useful in integrating the acoustic measures with any kind of video recordings containing audio data when inbuilt hardware means are not available. However, calibration and other technical aspects related to data acquisition and synchronization described in this article should be properly taken care of during the recording.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NtdphE
via IFTTT

The Impact of Exposure With No Training: Implications for Future Partner Training Research

Purpose
This research note reports on an unexpected negative finding related to behavior change in a controlled trial designed to test whether partner training improves the conversational skills of volunteers.
Method
The clinical trial involving training in “Supported Conversation for Adults with Aphasia” utilized a single-blind, randomized, controlled, pre–post design. Eighty participants making up 40 dyads of a volunteer conversation partner and an adult with aphasia were randomly allocated to either an experimental or control group of 20 dyads each. Descriptive statistics including exact 95% confidence intervals were calculated for the percentage of control group participants who got worse after exposure to individuals with aphasia.
Results
Positive outcomes of training in Supported Conversation for Adults with Aphasia for both the trained volunteers and their partners with aphasia were reported by Kagan, Black, Felson Duchan, Simmons-Mackie, and Square in 2001. However, post hoc data analysis revealed that almost one third of untrained control participants had a negative outcome rather than the anticipated neutral or slightly positive outcome.
Conclusions
If the results of this small study are in any way representative of what happens in real life, communication partner training in aphasia becomes even more important than indicated from the positive results of training studies. That is, it is possible that mere exposure to a communication disability such as aphasia could have negative impacts on communication and social interaction. This may be akin to what is known as a “nocebo” effect—something for partner training studies in aphasia to take into account.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NFtKTA
via IFTTT

A Phonetic Complexity-Based Approach for Intelligibility and Articulatory Precision Testing: A Preliminary Study on Talkers With Amyotrophic Lateral Sclerosis

Purpose
This study describes a phonetic complexity-based approach for speech intelligibility and articulatory precision testing using preliminary data from talkers with amyotrophic lateral sclerosis.
Method
Eight talkers with amyotrophic lateral sclerosis and 8 healthy controls produced a list of 16 low and high complexity words. Sixty-four listeners judged the samples for intelligibility, and 2 trained listeners completed phoneme-level analysis to determine articulatory precision. To estimate percent intelligibility, listeners orthographically transcribed each word, and the transcriptions were scored as being either accurate or inaccurate. Percent articulatory precision was calculated based on the experienced listeners' judgments of phoneme distortions, deletions, additions, and/or substitutions for each word. Articulation errors were weighted based on the perceived impact on intelligibility to determine word-level precision.
Results
Between-groups differences in word intelligibility and articulatory precision were significant at lower levels of phonetic complexity as dysarthria severity increased. Specifically, more severely impaired talkers showed significant reductions in word intelligibility and precision at both complexity levels, whereas those with milder speech impairments displayed intelligibility reductions only for more complex words. Articulatory precision was less sensitive to mild dysarthria compared to speech intelligibility for the proposed complexity-based approach.
Conclusions
Considering phonetic complexity for dysarthria tests could result in more sensitive assessments for detecting and monitoring dysarthria progression.

from #Audiology via ola Kala on Inoreader https://ift.tt/2MnQUJu
via IFTTT

Diagnosing Middle Ear Pathology in 6- to 9-Month-Old Infants Using Wideband Absorbance: A Risk Prediction Model

Purpose
The aim of this study was to develop a risk prediction model for detecting middle ear pathology in 6- to 9-month-old infants using wideband absorbance measures.
Method
Two hundred forty-nine infants aged 23–39 weeks (Mdn = 28 weeks) participated in the study. Distortion product otoacoustic emissions and high-frequency tympanometry were tested in both ears of each infant to assess middle ear function. Wideband absorbance was measured at ambient pressure in each participant from 226 to 8000 Hz. Absorbance results from 1 ear of each infant were used to predict middle ear dysfunction, using logistic regression. To develop a model likely to generalize to new infants, the number of variables was reduced using principal component analysis, and a penalty was applied when fitting the model. The model was validated using the opposite ears and with bootstrap resampling. Model performance was evaluated through measures of discrimination and calibration. Discrimination was assessed with the area under the receiver operating characteristic curve (AUC); and calibration, with calibration curves, which plotted actual against predicted probabilities.
Results
AUC of the fitted model was 0.887. The model validated adequately when applied to the opposite ears (AUC = 0.852) and with bootstrap resampling (AUC = 0.874). Calibration was satisfactory, with high agreement between predictions and observed results.
Conclusions
The risk prediction model had accurate discrimination and satisfactory calibration. Validation results indicate that it may generalize well to new infants. The model could potentially be used in diagnostic and screening settings. In the context of screening, probabilities provide an intuitive and flexible mechanism for setting the referral threshold that is sensitive to the costs associated with true and false-positive outcomes. In a diagnostic setting, predictions could be used to supplement visual inspection of absorbance for individualized diagnoses. Further research assessing the performance and impact of the model in these contexts is warranted.

from #Audiology via ola Kala on Inoreader https://ift.tt/2CxXxcI
via IFTTT

Erratum



from #Audiology via ola Kala on Inoreader https://ift.tt/2BcbUTf
via IFTTT

Pure-Tone Frequency Discrimination in Preschoolers, Young School-Age Children, and Adults

Purpose
Published data indicate nearly adultlike frequency discrimination in infants but large child–adult differences for school-age children. This study evaluated the role that differences in measurement procedures and stimuli may have played in the apparent nonmonotonicity. Frequency discrimination was assessed in preschoolers, young school-age children, and adults using stimuli and procedures that have previously been used to test infants.
Method
Listeners were preschoolers (3–4 years), young school-age children (5–6 years), and adults (19–38 years). Performance was assessed using a single-interval, observer-based method and a continuous train of stimuli, similar to that previously used to evaluate infants. Testing was completed using 500- and 5000-Hz standard tones, fixed within a set of trials. Thresholds for frequency discrimination were obtained using an adaptive, two-down one-up procedure. Adults and most school-age children responded by raising their hands. An observer-based, conditioned-play response was used to test preschoolers and those school-age children for whom the hand-raise procedure was not effective for conditioning.
Results
Results suggest an effect of age and frequency on thresholds but no interaction between these 2 factors. A lower proportion of preschoolers completed training compared with young school-age children. For those children who completed training, however, thresholds did not improve significantly with age; both groups of children performed more poorly than adults. Performance was better for the 500-Hz standard frequency compared with the 5000-Hz standard frequency.
Conclusions
Thresholds for school-age children were broadly similar to those previously observed using a forced-choice procedure. Although there was a trend for improved performance with increasing age, no significant age effect was observed between preschoolers and school-age children. The practice of excluding participants based on failure to meet conditioning criteria in an observer-based task could contribute to the relatively good performance observed for preschoolers in this study and the adultlike performance previously observed in infants.

from #Audiology via ola Kala on Inoreader https://ift.tt/2wqgl8d
via IFTTT

Erratum



from #Audiology via ola Kala on Inoreader https://ift.tt/2Cph6Uo
via IFTTT

Morphosyntax Production of Preschool Children With Hearing Loss: An Evaluation of the Extended Optional Infinitive and Surface Accounts

Purpose
The first aim of this study was to explore differences in profiles of morphosyntax production of preschool children with hearing loss (CHL) relative to age- and language-matched comparison groups. The second aim was to explore the potential of extending 2 long-standing theoretical accounts of morphosyntax weakness in children with specific language impairment to preschool CHL.
Method
This study examined conversational language samples to describe the accuracy and type of inaccurate productions of Brown's grammatical morphemes in 18 preschool CHL as compared with an age-matched group (±3 months, n = 18) and a language-matched group (±1 raw score point on an expressive language subtest, n = 18). Age ranged from 45 to 62 months. Performance across groups was compared. In addition, production accuracy of CHL on morphemes that varied by tense and duration was compared to assess the validity of extending theoretical accounts of children with specific language impairment to CHL.
Results
CHL exhibited particular difficulty with morphosyntax relative to other aspects of language. In addition, differences across groups on accuracy and type of inaccurate productions were observed. Finally, a unified approach to explaining morphosyntax weakness in CHL was more appropriate than a linguistic- or perceptual-only approach.
Conclusions
Taken together, the findings of this study support a unified theoretical account of morphosyntax weakness in CHL in which both tense and duration of morphemes play a role in morphosyntax production accuracy, with a more robust role for tense than duration.

from #Audiology via ola Kala on Inoreader https://ift.tt/2MlUkMX
via IFTTT

Verbal Agreement Inflection in German Children With Down Syndrome

Purpose
The study aims to explore whether finite verbal morphology is affected in children/adolescents with Down syndrome (DS), whether observed deficits in this domain are indicative of a delayed or deviant development, and whether they are due to phonetic/phonological problems or deficits in phonological short-term memory.
Method
An elicitation task on subject–verb agreement, a picture-naming task targeting stem-final consonants that also express verbal agreement, a nonword repetition task, and a test on grammar comprehension were conducted with 2 groups of monolingual German children: 32 children/adolescents with DS (chronological age M = 11;01 [years;months]) and a group of 16 typically developing children (chronological age M = 4;00) matched on nonverbal mental age.
Results
Analyses reveal that a substantial number of children/adolescents with DS are impaired in marking verbal agreement and fail to reach an acquisition criterion. Word-final consonants are, however, produced successfully when they do not express verbal agreement. Performance on verbal agreement and nonword repetition is related.
Conclusions
Data indicate that a substantial number of children/adolescents with DS display a deficit in verbal agreement inflection that cannot be attributed to phonetic/phonological problems. The influence of phonological short-term memory on the acquisition of subject–verb agreement has to be further explored.

from #Audiology via ola Kala on Inoreader https://ift.tt/2BInLst
via IFTTT

Quantitative Analysis of Agrammatism in Agrammatic Primary Progressive Aphasia and Dominant Apraxia of Speech

Purpose
The aims of the study were to assess and compare grammatical deficits in written and spoken language production in subjects with agrammatic primary progressive aphasia (agPPA) and in subjects with agrammatism in the context of dominant apraxia of speech (DAOS) and to investigate neuroanatomical correlates.
Method
Eight agPPA and 21 DAOS subjects performed the picture description task of the Western Aphasia Battery (WAB) both in writing and orally. Responses were transcribed and coded for linguistic analysis. agPPA and DAOS were compared to 13 subjects with primary progressive apraxia of speech (PPAOS) who did not have agrammatism. Spearman correlations were performed between the written and spoken variables. Patterns of atrophy in each group were compared, and relationships between the different linguistic measures and integrity of Broca's area were assessed.
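
As a minimal illustration of the correlational step, a Spearman rank correlation between paired written and spoken measures across subjects can be computed as follows; the values are invented for the example, not study data.

```python
# Spearman rank correlation between a written and a spoken measure,
# one pair of values per subject (illustrative numbers only).
from scipy.stats import spearmanr

# Proportion of grammatical utterances per subject, written vs. spoken.
written = [0.40, 0.55, 0.30, 0.70, 0.65, 0.25, 0.50, 0.60]
spoken  = [0.35, 0.60, 0.28, 0.66, 0.70, 0.30, 0.45, 0.58]

rho, p = spearmanr(written, spoken)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```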
Results
agPPA and DAOS both showed lower mean length of utterance, fewer grammatical utterances, more nonutterances, more syntactic and semantic errors, and fewer complex utterances than PPAOS in writing and speech, as well as fewer correct verbs and nouns in speech. Only verb ratio and proportion of grammatical utterances correlated between modalities. agPPA and DAOS both showed greater involvement of Broca's area than PPAOS, and atrophy of Broca's area correlated with proportion of grammatical and ungrammatical utterances and semantic errors in writing and speech.
Conclusions
agPPA and DAOS subjects showed similar patterns of agrammatism, although subjects performed differently when speaking versus writing. Integrity of Broca's area correlated with agrammatism.

from #Audiology via ola Kala on Inoreader https://ift.tt/2MySfy9
via IFTTT

Changes in the Synchrony of Multimodal Communication in Early Language Development

Purpose
The aim of this study is to analyze changes in the temporal synchrony between the gestural and vocal components of multimodal communicative behaviors in the transition from babbling to two-word productions.
Method
Ten Spanish-speaking children were observed at 9, 12, 15, and 18 months of age in a semistructured play situation. We longitudinally analyzed the synchrony between gestures and vocal productions and between their prominent parts. We also explored the relationship between gestural–vocal synchrony and independent measures of language development.
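
The sketch below illustrates one plausible overlap measure consistent with this description: the proportion of a gesture's duration covered by a co-occurring vocalization. The interval values are illustrative assumptions.

```python
# Proportion of a gesture interval overlapped by a vocalization interval.
# Intervals are (onset, offset) pairs in seconds; values are illustrative.

def overlap_proportion(gesture, vocalization):
    """Proportion of the gesture's duration covered by the vocalization."""
    g_on, g_off = gesture
    v_on, v_off = vocalization
    overlap = max(0.0, min(g_off, v_off) - max(g_on, v_on))
    return overlap / (g_off - g_on)

# A gesture from 1.20 to 2.00 s and a vocalization from 1.50 to 2.10 s
# overlap for 0.50 s, i.e., 62.5% of the gesture.
print(overlap_proportion((1.20, 2.00), (1.50, 2.10)))  # 0.625
```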
Results
Results showed that multimodal communicative behaviors tend to become shorter with age, with increasing overlap of their constituent elements. The same pattern is found when considering the synchrony between the prominent parts. The proportion of overlap between gestural and vocal elements at 15 months of age, as well as the proportion of the stroke overlapped with vocalization, appears to be related to lexical development 3 months later.
Conclusions
These results suggest that children produce gestures and vocalizations as coordinated elements of a single communication system before the transition to the two-word stage. This coordination is related to subsequent lexical development in this period.
Supplemental Material
https://doi.org/10.23641/asha.6912242

from #Audiology via ola Kala on Inoreader https://ift.tt/2vOiDNf
via IFTTT

Code-Switching in Highly Proficient Spanish/English Bilingual Adults: Impact on Masked Word Recognition

Purpose
The purpose of this study was to evaluate the impact of code-switching on Spanish/English bilingual listeners' speech recognition of English and Spanish words in the presence of competing speech-shaped noise.
Method
Participants were Spanish/English bilingual adults (N = 27) who were highly proficient in both languages. Target stimuli were English and Spanish words presented in speech-shaped noise at a −14-dB signal-to-noise ratio. There were 4 target conditions: (a) English only, (b) Spanish only, (c) mixed English, and (d) mixed Spanish. In the mixed-English condition, 75% of the words were in English, whereas 25% of the words were in Spanish. The percentages were reversed in the mixed-Spanish condition.
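
To make the level setting concrete, the sketch below scales a noise masker so the target is presented at a −14-dB signal-to-noise ratio, using an RMS-based SNR definition; the signals are random stand-ins, and the calibration details are assumptions rather than the study's procedure.

```python
# Scale a masker so a target is presented at a -14 dB signal-to-noise ratio.
import numpy as np

rng = np.random.default_rng(2)
fs = 16000
target = rng.standard_normal(fs)   # 1 s of "speech" (placeholder)
noise = rng.standard_normal(fs)    # 1 s of "speech-shaped noise" (placeholder)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

snr_db = -14.0
# Scale the noise so that 20*log10(rms(target)/rms(noise_scaled)) == snr_db.
noise_scaled = noise * (rms(target) / rms(noise)) * 10 ** (-snr_db / 20)
mixture = target + noise_scaled

print("achieved SNR:", 20 * np.log10(rms(target) / rms(noise_scaled)))
```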
Results
Accuracy was poorer for the majority (75%) and minority (25%) languages in both mixed-language conditions compared with the corresponding single-language conditions. Results of a follow-up experiment suggest that this finding cannot be explained in terms of an increase in the number of possible response alternatives for each picture in the mixed-language condition relative to the single-language condition.
Conclusions
Results suggest a cost of language mixing on speech perception when bilingual listeners alternate between languages in noisy environments. In addition, the cost of code-switching on speech recognition in noise was similar for both languages in this group of highly proficient Spanish/English bilingual speakers. Differences in response-set size could not account for the poorer results in the mixed-language conditions.

from #Audiology via ola Kala on Inoreader https://ift.tt/2OJ1yNl
via IFTTT

Using the Language ENvironment Analysis (LENA) System to Investigate Cultural Differences in Conversational Turn Count

Purpose
This study investigates how the variables of culture and hearing status might influence the amount of parent–child talk families engage in throughout an average day.
Method
Seventeen Vietnamese and 8 Canadian families of children with hearing loss and 17 Vietnamese and 13 Canadian families of typically hearing children between the ages of 18 and 48 months participated in this cross-comparison design study. Each child wore a Language ENvironment Analysis system digital language processor for 3 days. An automated vocal analysis then calculated an average conversational turn count (CTC) for each participant as the variable of investigation. The CTCs for the 4 groups were compared using a Kruskal–Wallis test and a set of planned pairwise comparisons.
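
A minimal sketch of this analysis, assuming Mann–Whitney U tests for the planned pairwise comparisons (the abstract does not name the pairwise test) and invented CTC values:

```python
# Kruskal-Wallis test across 4 groups, then planned pairwise comparisons.
from scipy.stats import kruskal, mannwhitneyu

ctc = {
    "viet_hl": [310, 280, 295, 350, 270],   # Vietnamese, hearing loss
    "viet_th": [300, 320, 260, 340, 290],   # Vietnamese, typical hearing
    "can_hl":  [480, 510, 450, 530, 470],   # Canadian, hearing loss
    "can_th":  [500, 460, 520, 540, 490],   # Canadian, typical hearing
}

H, p = kruskal(*ctc.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

# Planned comparisons: culture within hearing status, and hearing status
# within culture.
for a, b in [("viet_hl", "can_hl"), ("viet_th", "can_th"),
             ("viet_hl", "viet_th"), ("can_hl", "can_th")]:
    U, p = mannwhitneyu(ctc[a], ctc[b])
    print(f"{a} vs {b}: U = {U:.1f}, p = {p:.4f}")
```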
Results
The Canadian families participated in significantly more conversational turns than the Vietnamese families. No significant difference was found within either the Vietnamese or the Canadian cohort as a function of hearing status.
Conclusions
Culture, but not hearing status, influences CTCs as derived by the Language ENvironment Analysis system. Clinicians should consider how cultural communication practices might influence their suggestions for language stimulation.

from #Audiology via ola Kala on Inoreader https://ift.tt/2AGoCcE
via IFTTT

How Do Age and Hearing Loss Impact Spectral Envelope Perception?

Purpose
The goal was to evaluate the potential effects of increasing hearing loss and advancing age on spectral envelope perception.
Method
Spectral modulation detection was measured as a function of spectral modulation frequency from 0.5 to 8.0 cycles/octave. The spectral modulation task involved discriminating a noise carrier (3 octaves wide, from 400 to 3200 Hz) with a flat spectral envelope from a noise having a sinusoidal spectral envelope across a logarithmic audio frequency scale. Spectral modulation transfer functions (SMTFs; modulation threshold vs. modulation frequency) were computed and compared across 4 listener groups: young normal hearing, older normal hearing, older with mild hearing loss, and older with moderate hearing loss. Estimates of internal spectral contrast were obtained by computing excitation patterns.
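
The sketch below shows one plausible way to synthesize such a stimulus: a 3-octave noise whose spectral envelope is sinusoidal along a log-frequency axis at a given modulation frequency in cycles/octave. The modulation depth and synthesis details are assumptions, not the study's exact parameters.

```python
# Synthesize a 400-3200 Hz noise with a sinusoidal spectral envelope on a
# log-frequency (octave) axis, built in the frequency domain.
import numpy as np

fs, dur = 44100, 0.5
n = int(fs * dur)
freqs = np.fft.rfftfreq(n, 1 / fs)

mod_freq = 2.0      # spectral modulation frequency, cycles/octave
depth_db = 10.0     # peak-to-valley modulation depth (an assumed value)

rng = np.random.default_rng(3)
spec = np.zeros(len(freqs), dtype=complex)
band = (freqs >= 400) & (freqs <= 3200)

# Sinusoidal envelope in dB along log2(frequency), with random component phases.
octaves = np.log2(freqs[band] / 400.0)
env_db = (depth_db / 2) * np.sin(2 * np.pi * mod_freq * octaves)
mags = 10 ** (env_db / 20)
phases = rng.uniform(0, 2 * np.pi, band.sum())
spec[band] = mags * np.exp(1j * phases)

stimulus = np.fft.irfft(spec, n)
stimulus /= np.max(np.abs(stimulus))   # normalize for playback
```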
Results
SMTFs for young listeners with normal hearing were bandpass, with a minimum modulation detection threshold at 2 cycles/octave, and SMTFs for older listeners with normal hearing were remarkably similar to those of the young listeners. SMTFs for older listeners with mild and moderate hearing loss had a low-pass rather than a bandpass shape. Excitation patterns revealed that limited spectral resolution dictated modulation detection thresholds at high but not low spectral modulation frequencies. Even when factoring out (presumed) differences in frequency resolution among groups, spectral envelope perception was worse for the group with moderate hearing loss than for the other 3 groups.
Conclusions
Spectral envelope perception, as measured by spectral modulation detection thresholds, is compromised by hearing loss at higher spectral modulation frequencies, consistent with predictions based on the reduced spectral resolution known to accompany sensorineural hearing loss. Spectral envelope perception is not negatively impacted by advancing age at any spectral modulation frequency between 0.5 and 8.0 cycles/octave.

from #Audiology via ola Kala on Inoreader https://ift.tt/2osQTuc
via IFTTT

Effect of Probe-Tone Frequency on Ipsilateral and Contralateral Electrical Stapedius Reflex Measurement in Children With Cochlear Implants

Objectives: The upper loudness limit of electrical stimulation in cochlear implant patients is sometimes set using electrically elicited stapedius reflex thresholds (eSRTs), especially in children whose reporting skills may be limited. In unilateral cochlear implant patients, eSRT levels are typically measured in the contralateral, unimplanted ear because the ability to measure eSRTs in the implanted ear is likely to be limited by the cochlear implant surgery and consequent changes in middle ear dynamics. This practice is particularly limiting when fitting bilaterally implanted children because there is no unimplanted ear available for eSRT measurement. The goal of this study was to identify an improved measurement protocol to increase the success of eSRT measurement in the ipsilateral, contralateral, or both implanted ears of pediatric cochlear implant recipients. This work hypothesizes that a higher probe frequency (e.g., 1000 Hz rather than the standard 226 Hz), which is closer to the mechanical middle ear resonant frequency, may be more effective for measuring middle ear muscle contraction in either ear.
Design: In the present study, eSRTs were measured using multiple probe frequencies (226, 678, and 1000 Hz) in the ipsilateral and contralateral ears of 19 children with unilateral Advanced Bionics (AB) cochlear implants (mean age = 8.6 years, SD = 2.29). An integrated middle ear analyzer designed by AB was used to elicit and detect stapedius reflexes and assign eSRT levels. In this system, an Interacoustics Titan middle ear analyzer performed middle ear measurements in synchrony with research software running on an AB Neptune speech processor, which controlled the delivery of electrical pulse trains at varying levels to the test subject. Changes in middle ear acoustic admittance following an electrical pulse train stimulus indicated the occurrence of an electrically elicited stapedius reflex.
Results: Of the 19 ears tested, ipsilateral eSRTs were successfully measured in 3 (16%), 4 (21%), and 7 (37%) ears using probe tones of 226, 678, and 1000 Hz, respectively. Contralateral eSRT levels were measured in 11 (58%), 13 (68%), and 13 (68%) ears using the 3 probe frequencies, respectively. A significant difference was found in the incidence of successful eSRT measurement as a function of probe frequency in the ipsilateral ears, with the greatest pairwise difference between the 226- and 1000-Hz probe tones. A significant increase in contralateral eSRT measurement success as a function of probe frequency was not found. These findings are consistent with the idea that changes in middle ear mechanics, secondary to cochlear implant surgery, may interfere with the detection of stapedius muscle contraction in the ipsilateral middle ear. The best logistic, mixed-effects model of the occurrence of successful eSRT measures included ear of measurement and probe frequency as significant fixed effects. No significant differences in average eSRT levels were observed across ipsilateral and contralateral measurements or as a function of probe frequency.
Conclusion: Typically, measurement of stapedius reflexes is less successful in the implanted ears of cochlear implant recipients than in the contralateral, unimplanted ear. The ability to measure eSRT levels ipsilaterally can be improved by using a higher probe frequency.
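
A minimal sketch of the detection logic described in the Design: flag a reflex when post-stimulus admittance deviates from a baseline by more than a criterion, and take the lowest stimulation level meeting the criterion as the eSRT. The criterion value, units, and data below are illustrative assumptions, not the system's actual algorithm.

```python
# Toy eSRT search: compare post-stimulus admittance against a baseline and
# return the lowest level at which the admittance shift meets a criterion.
import numpy as np

def reflex_detected(baseline_mmho, post_stimulus_mmho, criterion=0.02):
    """Flag a reflex when mean admittance shifts from baseline by >= criterion."""
    shift = abs(np.mean(post_stimulus_mmho) - np.mean(baseline_mmho))
    return shift >= criterion

def find_esrt(levels_cu, admittance_traces, baseline):
    """Return the lowest stimulation level whose trace meets the criterion."""
    for level, trace in zip(levels_cu, admittance_traces):
        if reflex_detected(baseline, trace):
            return level
    return None  # no reflex within the tested range

# Illustrative data: an admittance shift appears at the third level (150 CU).
baseline = [1.00, 1.01, 0.99]
traces = [[1.00, 1.00], [1.00, 0.99], [0.96, 0.95]]
print(find_esrt([100, 125, 150], traces, baseline))  # -> 150
```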
ACKNOWLEDGMENTS: All authors listed in the article contributed toward the work reported in this submission. Lizette Carranco Hernandez, Lisette Cristerna Sánchez, Miriam Camacho Olivares, Carina Rodríguez, and Aniket A. Saoji were involved in data collection and interpretation of the data. Aniket A. Saoji and Charles C. Finley were involved in experimental software and hardware setup, statistical analysis, and preparation of the article. Author Carina Rodríguez is an employee of Advanced Bionics (a cochlear implant manufacturer). Authors Charles C. Finley and Aniket A. Saoji were employed by Advanced Bionics when this study was completed. Address for correspondence: Aniket Saoji, PhD, CCC-A, Dept. of Otolaryngology, Mayo Clinic, 200 1st St SW, Rochester, MN 55902. E-mail: aniket.saoji@gmail.com. Received August 9, 2017; accepted July 11, 2018. Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.

from #Audiology via ola Kala on Inoreader https://ift.tt/2O1fkxg
via IFTTT

Language Sample Practices With Children Who Are Deaf and Hard of Hearing

Purpose
In this study, we aimed to identify common language sample practices among professionals who work with children who are Deaf/hard of hearing (DHH) and use listening and spoken language, in order to better understand why and how language sampling can be used by speech-language pathologists serving this population.
Method
An electronic questionnaire was disseminated to professionals who serve children who are DHH and use listening and spoken language in the United States. Participant responses were coded in an Excel file and checked for completeness. Descriptive statistics were used to analyze trends.
Results
A total of 168 professionals completed the survey. A majority of participants reported that they use language sampling as part of their intervention when working with children who are DHH. However, approximately half of participants reported using norm-referenced testing most often when evaluating the language of children who are DHH, even though they felt that language samples were more sensitive in identifying these children's errors. Participants reported using language samples to monitor progress and set goals for clients. Participants rarely used language samples for eligibility decisions or interprofessional collaboration.
Conclusions
Language samples offer a unique way to examine aspects of a child's language development that norm-referenced assessments are not sensitive enough to detect, particularly for children who are DHH. This study offers insights into current practice and implications for developing a more clearly defined language sample protocol to guide the use of language samples with children who are DHH and use listening and spoken language.

from #Audiology via ola Kala on Inoreader https://ift.tt/2pmNfCF
via IFTTT

Assistant Professor Stephanie Riès-Cornou Receives Prestigious New Century Scholars Research Grant from the American Speech-Language Hearing Foundation

SLHS Assistant Professor Stephanie Riès-Cornou

This project, conducted in collaboration with Drs. Henrike Blumenfeld and Tracy Love, will test the mechanisms by which Spanish-English bilinguals accomplish word retrieval after stroke to the left hemisphere. Bilingual stroke patients with word-retrieval deficits face the additional challenge of overcoming cross-linguistic interference when they retrieve words as they speak.

Dr. Stephanie Riès-Cornou and colleagues will use a combination of behavioral, eye-tracking, and spatially-enhanced EEG to specify the temporal dynamics of word retrieval. Results of this research may provide treatment targets for future studies focused on enhancing recovery from word retrieval deficits in bilingual stroke patients.



from #Audiology via ola Kala on Inoreader https://ift.tt/2OAvVoM
via IFTTT

Examining Factors Influencing the Viability of Automatic Acoustic Analysis of Child Speech

Purpose
Heterogeneous child speech was force-aligned to investigate whether (a) manipulating specific parameters could improve alignment accuracy and (b) forced alignment could be used to replicate published results on acoustic characteristics of /s/ production by children.
Method
In Part 1, child speech from 2 corpora was force-aligned with a trainable aligner (Prosodylab-Aligner) under different conditions that systematically manipulated the input training data and the type of transcription used. Alignment accuracy was determined by comparing hand and automatic alignments in terms of how often they overlapped (%-Match) and of absolute differences in duration and boundary placements. Using mixed-effects regression, accuracy was modeled as a function of alignment conditions, as well as segment and child age. In Part 2, forced alignments derived from a subset of the alignment conditions in Part 1 were used to extract the spectral center of gravity of /s/ productions from young children. These findings were compared to published results that used manual alignments of the same data.
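
As an illustration of the Part 2 measure, the spectral center of gravity of a segment is its power-weighted mean frequency. A minimal computation over a synthetic stand-in segment (the windowing choice is an assumption):

```python
# Spectral center of gravity (CoG): power-weighted mean frequency of a segment.
import numpy as np

fs = 22050
rng = np.random.default_rng(4)
segment = rng.standard_normal(int(0.1 * fs))   # stand-in 100-ms /s/ segment

def center_of_gravity(x, fs):
    """Power-weighted mean frequency, in Hz."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

print(f"CoG = {center_of_gravity(segment, fs):.0f} Hz")
```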
Results
Overall, the results of Part 1 demonstrated that using training data more similar to the data to be aligned, as well as phonetic transcription, led to improvements in alignment accuracy. Speech from older children was aligned more accurately than speech from younger children. In Part 2, the /s/ center of gravity extracted from force-aligned segments was found to diverge in the speech of male and female children, replicating the pattern found in previous work using manually aligned segments. This was true even for the least accurate forced alignment method.
Conclusions
Alignment accuracy of child speech can be improved by using more specific training and transcription. However, poor alignment accuracy was not found to impede acoustic analysis of /s/ produced by even very young children. Thus, forced alignment presents a useful tool for the analysis of child speech.
Supplemental Material
https://doi.org/10.23641/asha.7070105

from #Audiology via ola Kala on Inoreader https://ift.tt/2NoQVCh
via IFTTT

Normative Data for a Rapid, Automated Test of Spatial Release From Masking

Purpose
The purpose of this study is to report normative data and predict thresholds for a rapid test of spatial release from masking for speech perception. The test is easily administered and has good repeatability, with the potential to be used in clinics and laboratories. Normative functions were generated for adults varying in age and amounts of hearing loss.
Method
The test of spatial release presents a virtual auditory scene over headphones with 2 conditions: colocated (with target and maskers at 0°) and spatially separated (with target at 0° and maskers at ± 45°). Listener thresholds are determined as target-to-masker ratios, and spatial release from masking (SRM) is determined as the difference between the colocated condition and spatially separated condition. Multiple linear regression was used to fit the data from 82 adults 18–80 years of age with normal to moderate hearing loss (0–40 dB HL pure-tone average [PTA]). The regression equations were then used to generate normative functions that relate age (in years) and hearing thresholds (as PTA) to target-to-masker ratios and SRM.
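
A minimal sketch of this normative-function approach: fit thresholds as a linear function of age and PTA by least squares, then predict an expected threshold for a new listener. The data and resulting coefficients are invented, not the published norms.

```python
# Multiple linear regression of thresholds on age and PTA, via least squares.
import numpy as np

rng = np.random.default_rng(5)
n = 82
age = rng.uniform(18, 80, n)            # years
pta = rng.uniform(0, 40, n)             # dB HL
# Fake separated-condition thresholds (TMR, dB) with age and PTA effects.
tmr_sep = -6 + 0.05 * age + 0.10 * pta + rng.normal(0, 2, n)

# Fit: threshold ~ intercept + age + PTA.
X = np.column_stack([np.ones(n), age, pta])
coef, *_ = np.linalg.lstsq(X, tmr_sep, rcond=None)
print("intercept, age, PTA coefficients:", np.round(coef, 3))

# Predicted (normative) threshold for a 65-year-old with a 25 dB HL PTA.
print("predicted TMR:", coef @ [1.0, 65.0, 25.0])
```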
Results
Normative functions were able to predict thresholds with an error of less than 3.5 dB in all conditions. In the colocated condition, the function included only age as a predictive parameter, whereas in the spatially separated condition, both age and PTA were included as parameters. For SRM, PTA was the only significant predictor. Different functions were generated for the 1st run, the 2nd run, and the average of the 2 runs. All 3 functions were largely similar in form, with the smallest error associated with the function based on the average of the 2 runs.
Conclusion
With the normative functions generated from this data set, it would be possible for a researcher or clinician to interpret data from a small number of participants or even a single patient without having to first collect data from a control group, substantially reducing the time and resources needed.
Supplemental Material
https://doi.org/10.23641/asha.7080878

from #Audiology via ola Kala on Inoreader https://ift.tt/2DkgwHU
via IFTTT

A Study of Social Media Utilization by Individuals With Tinnitus

Purpose
As more people experience tinnitus, social awareness of the condition has increased, due in part to the Internet. Social media platforms are increasingly being used by patients to seek health-related information for various conditions, including tinnitus. These online platforms may be used to seek guidance from, and share experiences with, individuals suffering from a similar disorder. Some social media platforms can also be used to communicate with health care providers. The aim of this study was to investigate the prevalence of tinnitus-related information on social media platforms.
Method
The present investigation analyzed the portrayal of tinnitus-related information across 3 social media platforms: Facebook (pages and groups), Twitter, and YouTube. We performed a comprehensive analysis of the platforms using the key words “tinnitus” and “ringing in the ears.” The results on each platform were manually examined by 2 reviewers based on social media activity metrics, such as “likes,” “followers,” and “comments.”
Results
The different social media platforms yielded diverse results, allowing individuals to learn about tinnitus, seek support, advocate for tinnitus awareness, and connect with medical professionals. The greatest activity was seen on Facebook pages, followed by YouTube videos. Various degrees of misinformation were found across all social media platforms.
Conclusions
The present investigation reveals copious amounts of tinnitus-related information on different social media platforms, which the community with tinnitus may use to learn about and cope with the condition. Audiologists must be aware that tinnitus sufferers often turn to social media for additional help and should understand the current climate of how tinnitus is portrayed. Clinicians should be equipped to steer individuals with tinnitus toward valid information.

from #Audiology via ola Kala on Inoreader https://ift.tt/2NmGJu4
via IFTTT
