Monday, February 29, 2016

Education of Audiologists in Underserved Regions

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9xr8
via IFTTT

Mystery Surrounds Auditory Neuropathy Spectrum Disorder

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9yLC
via IFTTT

Symptom: Right-Sided Tinnitus

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9xr6
via IFTTT

Mental Well-Being Tightly Linked to Hearing Health

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9xr2
via IFTTT

Beyond Hearing Loss: Self-Management in Audiological Practice

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9xaK
via IFTTT

Complementary Factors in Processing

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9xaG
via IFTTT

Management of Psychosocial Challenges Posed by Communication Breakdowns

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/1RC9vQ8
via IFTTT

Manufacturers News

No abstract available

from #Audiology via ola Kala on Inoreader http://ift.tt/21y9LHQ
via IFTTT

Best Masking Sound For Tinnitus

Tinnitus is a condition often described as a “ringing” or “buzzing” sound in the ears. It can be continuous or intermittent, and the sound may be loud or soft and subtle. The condition is fairly common, and many people simply adjust to the ongoing sounds in their ears without great difficulty. For some people, however, the sound can be loud and intrusive. It can interfere with normal hearing, even though the tinnitus itself does not necessarily cause hearing loss.

Searching For Relief

Tinnitus can appear for many reasons. The leading cause is exposure to loud noise, such as gunshots or loud machinery. Infections and ear blockages can also bring on the condition, as can some medications, including aspirin and certain antidepressants. For some people the sound is continuous and carries on through the night, causing sleep loss. This can create a vicious circle, as fatigue and stress are also thought to be linked to tinnitus.

There is no known cure for tinnitus at this time, but some people have found relief by listening to another sound that helps cover the one they hear. This is called a “masking sound.” The best masking sound varies from person to person, but a few kinds of sound are commonly reported to ease the symptoms.

Sound Therapies

Finding a good masking sound is an important part of therapy for this condition. Whether it is rainfall, relaxing ocean surf, or the quiet hum of general “white noise,” a well-chosen masking sound can work in several ways to ease the anxiety that tinnitus can bring on in a patient.

The best masking sound is one that completely covers the sound inside the ear, or at least enough of it to be a distraction. Distraction matters because it can ease the symptoms immediately. Sound masking also helps train the patient’s brain to tune out the tinnitus. The American Tinnitus Association (ATA) describes this brain training as “classifying” the sound as “unimportant,” and thus easier to ignore. The ATA also refers to a neuromodulation effect of sound masking, as masking sounds can help relieve the neural hyperactivity that is thought to contribute to tinnitus.

Today there are many options for masking sounds, ranging from peaceful ocean and other nature sounds to white noise, with or without its high frequencies. Masking sounds can be played from a dedicated player or even delivered through earpieces.
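For readers curious how such maskers are produced digitally, here is a minimal sketch (our illustration, not from the article) that synthesizes plain white noise and a “white noise without high frequencies” variant by zeroing FFT bins above a cutoff. The 4 kHz cutoff and the sample rate are arbitrary choices for the demo.

```python
import numpy as np

def white_noise(duration_s, sample_rate=44100, rng=None):
    """Uniform white noise in [-1, 1] (roughly equal energy at all frequencies)."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.uniform(-1.0, 1.0, int(duration_s * sample_rate))

def remove_high_frequencies(signal, cutoff_hz, sample_rate=44100):
    """Brick-wall low-pass: zero every FFT bin above cutoff_hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

noise = white_noise(1.0)                       # one second of white noise
masker = remove_high_frequencies(noise, 4000)  # drop content above 4 kHz
```

Writing `masker` to a WAV file or looping it through earphones would mimic the commercial players the article mentions.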

The good news is that masking sounds do bring relief, and they do so at low cost and with no side effects, which is welcome news for tinnitus sufferers everywhere.




from #Audiology via xlomafota13 on Inoreader http://ift.tt/21F1cYl
via IFTTT

Compressive sensing with a spherical microphone array

A wave expansion method is proposed in this work, based on measurements with a spherical microphone array, and formulated in the framework provided by Compressive Sensing. The method promotes sparse solutions via ℓ1-norm minimization, so that the measured data are represented by few basis functions. This results in fine spatial resolution and accuracy. This publication covers the theoretical background of the method, including experimental results that illustrate some of the fundamental differences with the “conventional” least-squares approach. The proposed methodology is relevant for source localization, sound field reconstruction, and sound field analysis.
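The ℓ1-norm minimization at the heart of this approach can be illustrated with a toy sparse-recovery problem. The sketch below is our illustration, not the authors' code: it uses the iterative shrinkage-thresholding algorithm (ISTA) to recover a 3-sparse coefficient vector from 40 random measurements, with the matrix, sparsity pattern, and regularization weight all made up for the demo.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]             # a 3-sparse "source" vector
y = A @ x_true                                      # noiseless measurements
x_hat = ista(A, y)
```

Despite the system being heavily underdetermined, the ℓ1 penalty steers the solver to the few active basis functions, which is the same principle the paper exploits for fine spatial resolution.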



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1ScO5uW
via IFTTT

Differential Group Delay of the Frequency Following Response Measured Vertically and Horizontally

Abstract

The frequency following response (FFR) arises from the sustained neural activity of a population of neurons that are phase locked to periodic acoustic stimuli. Determining the source of the FFR noninvasively may be useful for understanding the function of phase locking in the auditory pathway to the temporal envelope and fine structure of sounds. The current study compared the FFR recorded with a horizontally aligned (mastoid-to-mastoid) electrode montage and a vertically aligned (forehead-to-neck) electrode montage. Unlike previous studies, envelope and fine structure latencies were derived simultaneously from the same narrowband stimuli to minimize differences in cochlear delay. Stimuli were five amplitude-modulated tones centered at 576 Hz, each with a different modulation rate, resulting in different side-band frequencies across stimulus conditions. Changes in response phase across modulation frequency and side-band frequency (group delay) were used to determine the latency of the FFR reflecting phase locking to the envelope and temporal fine structure, respectively. For the FFR reflecting phase locking to the temporal fine structure, the horizontal montage had a shorter group delay than the vertical montage, suggesting an earlier generation source within the auditory pathway. For the FFR reflecting phase locking to the envelope, group delay was longer than that for the fine structure FFR, and no significant difference in group delay was found between montages. However, it is possible that multiple sources of FFR (including the cochlear microphonic) were recorded by each montage, complicating interpretations of the group delay.
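Group delay as used in this abstract is the negative slope of unwrapped response phase against frequency, delay = -(1/2π)·dφ/df. As a hedged illustration (the side-band frequencies and the 8 ms latency below are invented for the demo, not taken from the study), a linear fit recovers a pure latency exactly:

```python
import numpy as np

def group_delay_ms(freqs_hz, phases_rad):
    """Group delay from the slope of unwrapped phase vs. frequency:
    delay = -(1 / (2*pi)) * d(phase)/d(frequency), reported in milliseconds."""
    slope = np.polyfit(freqs_hz, np.unwrap(phases_rad), 1)[0]  # radians per Hz
    return -slope / (2.0 * np.pi) * 1000.0

# Hypothetical side-band frequencies around a 576 Hz carrier; an 8 ms pure
# latency imposes phase = -2*pi*f*0.008 at each frequency.
f = np.array([480.0, 528.0, 576.0, 624.0, 672.0])
phi = -2.0 * np.pi * f * 0.008
delay = group_delay_ms(f, phi)
```

A shorter group delay for one electrode montage than another, as reported here, would show up as a shallower phase-versus-frequency slope.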



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1nbrlP2
via IFTTT

Pulse-spreading harmonic complex as an alternative carrier for vocoder simulations of cochlear implants

Noise- and sine-carrier vocoders are often used to acoustically simulate the information transmitted by a cochlear implant (CI). However, sine waves fail to mimic the broad spread of excitation produced by a CI, and noise bands contain intrinsic modulations that are absent in CIs. The present study proposes pulse-spreading harmonic complexes (PSHCs) as an alternative acoustic carrier in vocoders. Sentence-in-noise recognition was measured in 12 normal-hearing subjects for noise-, sine-, and PSHC-vocoders. Consistent with the amount of intrinsic modulations present in each vocoder condition, the average speech reception threshold obtained with the PSHC-vocoder was higher than with sine-vocoding but lower than with noise-vocoding.
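A channel vocoder of the kind compared here replaces each band's fine structure with a carrier while preserving its envelope. Below is a minimal sine-carrier version only (generating PSHC carriers is beyond this sketch); the channel count, band edges, and FFT-based filtering are simplifications we chose for illustration, not the study's processing chain.

```python
import numpy as np

def _analytic_mask(n):
    """Mask that turns an FFT into the FFT of the analytic signal (Hilbert)."""
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return h

def sine_vocode(signal, sample_rate, n_channels=8, fmin=100.0, fmax=8000.0):
    """Split into log-spaced bands, take each band's Hilbert envelope, and
    re-impose it on a sine carrier at the band's geometric centre frequency."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    t = np.arange(len(signal)) / sample_rate
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0),
                            n=len(signal))
        envelope = np.abs(np.fft.ifft(np.fft.fft(band) * _analytic_mask(len(band))))
        out += envelope * np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
    return out

# Demo: vocode a pure 1 kHz tone (hypothetical parameters)
sr = 16000
t = np.arange(8000) / sr
vocoded = sine_vocode(np.sin(2 * np.pi * 1000 * t), sr)
```

Swapping the `np.sin` carrier for a noise band or a PSHC is exactly the manipulation whose intrinsic modulations the study measures.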



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1LPV4Dw
via IFTTT

Sunday, February 28, 2016

The Influence of Cognitive Factors on Outcomes with Frequency Lowering

Introduction: Since frequency lowering technology has become commercially available in modern digital hearing aids, researchers have set out to determine what benefits this technology could provide hearing-impaired patients. There is an abundance

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Sbuc7M
via IFTTT

Understanding and Treating Severe and Profound Hearing Loss

Oticon has a history of creating excellent solutions for patients with severe-to-profound hearing loss. With the release of Oticon Dynamo, Sensei Super Power, and the Plus Power products in our performance line categories, we have again raised the ba

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1XUleMt
via IFTTT

Beyond what the eye can see.

Surv Ophthalmol. 2016 Feb 24;

Authors: Ahmad KE, Fraser CL, Sue CM, Barton JJ

Abstract
A 45-year-old woman presented with acute sequential optic neuropathy resulting in bilateral complete blindness. No significant visual recovery occurred. Past medical history was relevant for severe pre-eclampsia with resultant renal failure, diabetes mellitus, and sudden bilateral hearing loss. There was a family history of diabetes mellitus in her mother. Testing for common causes of bilateral optic neuropathy did not reveal a diagnosis for her illness. The maternal history of diabetes and personal history of diabetes and deafness prompted testing for mitochondrial disease. Testing was negative for the three primary mitochondrial DNA mutations responsible for Leber hereditary optic neuropathy (LHON), but the patient was subsequently found to have a disease-causing mitochondrial DNA mutation, m.13513G>A. The case illustrates the importance of early testing for mitochondrial disease, and demonstrates that LHON-like presentations may be missed if testing is limited to the three primary mutations.

PMID: 26921807 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1oQcmMi
via IFTTT

Validity and repeatability of three in-shoe pressure measurement systems

Publication date: Available online 28 February 2016
Source:Gait & Posture
Author(s): Carina Price, Daniel Parker, Christopher Nester
In-shoe pressure measurement devices are used in research and in the clinic to quantify plantar foot pressures. Various devices are available, differing in size, sensor number, and sensor type, and therefore in accuracy and repeatability. Three devices (Medilogic, Tekscan, and Pedar) were examined in a 2-day × 3-trial design, quantifying insole response to regional and whole-insole loading. The whole-insole protocol applied an even pressure (50-600 kPa) to the insole surface for 0-30 seconds in the Novel TruBlue™ device. The regional protocol utilised cylinders with contact surfaces of 3.14 and 15.9 cm² to apply pressures of 50 and 200 kPa. The validity (% difference and Root Mean Square Error: RMSE) and repeatability (Intra-Class Correlation Coefficient: ICC) of the applied pressures (whole insole) and contact area (regional) were the outcome variables. Validity of the Pedar system was highest (RMSE 2.6 kPa; difference 3.9%), with the Medilogic (RMSE 27.0 kPa; difference 13.4%) and Tekscan (RMSE 27.0 kPa; difference 5.9%) systems displaying reduced validity. The average and peak pressures demonstrated high between-day repeatability for all three systems and each insole size (ICC ≥ 0.859). The regional contact area % difference ranged from -97% to +249%, but the ICC demonstrated medium to high between-day repeatability (ICC ≥ 0.797). Due to the varying responses of the systems, the choice of an appropriate pressure measurement device must be based on the loading characteristics and the outcome variables sought. Medilogic and Tekscan were most effective between 200 and 300 kPa; Pedar performed well across all pressures. Contact area was less precise, but relatively repeatable, for all systems.
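The validity metrics quoted above are straightforward to compute. As a sketch with made-up calibration numbers (not the paper's data):

```python
import numpy as np

def rmse(applied, measured):
    """Root-mean-square error between applied and measured pressures (kPa)."""
    a, m = np.asarray(applied, float), np.asarray(measured, float)
    return float(np.sqrt(np.mean((m - a) ** 2)))

def percent_difference(applied, measured):
    """Mean signed deviation as a percentage of the applied pressure."""
    a, m = np.asarray(applied, float), np.asarray(measured, float)
    return float(np.mean((m - a) / a) * 100.0)

# Hypothetical calibration run (kPa), for illustration only
applied = [50, 100, 200, 400, 600]
measured = [55, 104, 207, 391, 612]
```

Run over a device's whole-insole loading trials, these two numbers reproduce the style of the RMSE and % difference figures reported for Pedar, Medilogic, and Tekscan.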



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1oH9Siv
via IFTTT

HEI-OC1 Cells as a Model for Investigating Drug Cytotoxicity

Publication date: Available online 27 February 2016
Source:Hearing Research
Author(s): Gilda Kalinec, Pru Thein, Channy Park, Federico Kalinec
The House Ear Institute–Organ of Corti 1 (HEI-OC1) line is one of the few, and arguably the most used, mouse auditory cell lines available for research purposes. Originally proposed as an in vitro system for screening of ototoxic drugs, it has been used to investigate, among other topics, apoptotic pathways, autophagy and senescence, mechanisms of cell protection, inflammatory responses, cell differentiation, effects of hypoxia, oxidative and endoplasmic reticulum stress, and expression of molecular channels and receptors. However, the use of different techniques with different goals has resulted in apparent contradictions about the actual response of these cells to some specific treatments. We have now performed studies to characterize the actual response of HEI-OC1 cells to a battery of commonly used pharmacological drugs. We evaluated cell toxicity, apoptosis, viability, proliferation, senescence, and autophagy in response to APAP (acetaminophen), cisplatin, dexamethasone, gentamicin, penicillin, neomycin, streptomycin, and tobramycin, at five different doses and two time points (24 and 48 hours), by flow cytometry techniques and caspase 3/7, MTT, cytotoxicity, BrdU, Beclin1, LC3, and SA-β-galactosidase assays. We also used HEK-293 and HeLa cells to compare some of their responses with those of HEI-OC1 cells. Our results indicate that every cell line responds to each drug in a different way, with HEI-OC1 cells showing a distinctive sensitivity to at least one of the mechanisms under study. Altogether, our results suggest that HEI-OC1 cells might be a useful model for investigating biological responses associated with auditory cells, including auditory sensory cells, but a careful approach would be necessary when evaluating drug effects.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ql9hhc
via IFTTT

Human Audiometric Thresholds do not Predict Specific Cellular Damage in the Inner Ear

Publication date: Available online 27 February 2016
Source:Hearing Research
Author(s): Lukas D. Landegger, Demetri Psaltis, Konstantina M. Stankovic
Introduction: As otology enters the field of gene therapy and human studies commence, the question arises whether audiograms – the current gold standard for the evaluation of hearing function – can consistently predict cellular damage within the human inner ear and thus should be used to define inclusion criteria for trials. Current assumptions rely on the analysis of small groups of human temporal bones post mortem or on psychophysical identification of cochlear “dead regions” in vivo, but a comprehensive study assessing the correlation between audiometric thresholds and cellular damage within the cochlea is lacking.
Methods: A total of 131 human temporal bones from 85 adult individuals (ages 19-92 years, median 69 years) with sensorineural hearing loss due to various etiologies were analyzed. Cytocochleograms – which quantify loss of hair cells, neurons, and strial atrophy along the length of the cochlea – were compared with subjects’ latest available audiometric tests prior to death (time range 5 hours to 22 years, median 24 months). The Greenwood function and the equivalent rectangular bandwidth were used to infer, from cytocochleograms, cochlear locations corresponding to frequencies tested in clinical audiograms. Correlation between audiometric thresholds at clinically tested frequencies and cell type-specific damage in those frequency regions was examined by calculating Spearman’s correlation coefficients.
Results: Similar audiometric profiles reflected widely different cellular damage in the cochlea. In our diverse group of patients, audiometric thresholds tended to be more influenced by hair cell loss than by neuronal loss or strial atrophy. Spearman’s correlation coefficient across frequencies was at most 0.7 and often below 0.5, with 1.0 indicating perfect correlation.
Conclusions: Audiometric thresholds do not predict specific cellular damage in the human inner ear. Our study highlights the need for better non- or minimally invasive tools, such as cochlear endoscopy, to establish cellular-level diagnosis and thereby guide therapy and monitor response to treatment.



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1LN87pm
via IFTTT

Saturday, February 27, 2016

Motivational engagement in first-time hearing aid users: A feasibility study

10.3109/14992027.2015.1133935
Melanie Ferguson

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ll2fIW
via IFTTT

Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing

10.3109/14992027.2015.1135352
Tim Jürgens

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1WQs0Sl
via IFTTT

The case for earlier cochlear implantation in postlingually deaf adults

10.3109/14992027.2015.1128125
Richard C. Dowell

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1Ll2cNb
via IFTTT

Effects of genetic correction on the differentiation of hair cell-like cells from iPSCs with MYO15A mutation.

Cell Death Differ. 2016 Feb 26;

Authors: Chen JR, Tang ZH, Zheng J, Shi HS, Ding J, Qian XD, Zhang C, Chen JL, Wang CC, Li L, Chen JZ, Yin SK, Shao JZ, Huang TS, Chen P, Guan MX, Wang JF

Abstract
Deafness or hearing loss is a major issue in human health. Inner ear hair cells are the main sensory receptors responsible for hearing. Defects in hair cells are one of the major causes of deafness. A combination of induced pluripotent stem cell (iPSC) technology with genome-editing technology may provide an attractive cell-based strategy to regenerate hair cells and treat hereditary deafness in humans. Here, we report the generation of iPSCs from members of a Chinese family carrying MYO15A c.4642G>A and c.8374G>A mutations and the induction of hair cell-like cells from those iPSCs. The compound heterozygous MYO15A mutations resulted in abnormal morphology and dysfunction of the derived hair cell-like cells. We used a CRISPR/Cas9 approach to genetically correct the MYO15A mutation in the iPSCs and rescued the morphology and function of the derived hair cell-like cells. Our data demonstrate the feasibility of generating inner ear hair cells from human iPSCs and the functional rescue of gene mutation-based deafness by using genetic correction. Cell Death and Differentiation advance online publication, 26 February 2016; doi: 10.1038/cdd.2016.16.

PMID: 26915297 [PubMed - as supplied by publisher]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/211peuw
via IFTTT

Friday, February 26, 2016

Vestibular Assessment and Rehabilitation: Ten-Year Survey Trends of Audiologists' Opinions and Practice.

J Am Acad Audiol. 2016 Feb;27(2):126-40

Authors: Nelson MD, Akin FW, Riska KM, Andresen K, Mondelli SS

Abstract
BACKGROUND: The past decade has yielded changes in the education and training of audiologists and technological advancements that have become widely available for clinical balance function testing. It is unclear if recent advancements in vestibular instrumentation or the transition to an AuD degree have affected audiologists' vestibular clinical practice or opinions.
PURPOSE: The purpose of this study was to examine predominant opinions and practices for vestibular assessment (VA) and vestibular rehabilitation (VR) over the past decade and between master's- and AuD-level audiologists.
METHOD: A 31-question survey was administered to audiologists via U.S. mail in 2003 (N = 7,500) and electronically in 2014 (N = 9,984) with a response rate of 12% and 10%, respectively.
RESULTS: There was an increase in the number of audiologists providing vestibular services in the past decade. Most respondents agreed that audiologists were the most qualified professionals to conduct VA. Less than half of the surveyed audiologists felt that graduate training was adequate for VA. AuD-level audiologists were more satisfied with graduate training and felt more comfortable performing VA compared to master's-level audiologists. Few respondents agreed that audiologists were the most qualified professionals to conduct VR or that graduate training prepared them to conduct VR. The basic vestibular test battery was unchanged across surveys and included: calorics, smooth pursuit, saccades, search for spontaneous, positional, gaze and optokinetic nystagmus, Dix-Hallpike, case history, and hearing evaluation. There was a trend toward greater use of air (versus water) calorics, videonystagmography (versus electronystagmography), and additional tests of vestibular and balance function.
CONCLUSIONS: VA is a growing specialty area in the field of audiology. Better training opportunities are needed to increase audiologists' knowledge and skills for providing vestibular services. The basic tests performed during VA have remained relatively unchanged over the past 10 yr.

PMID: 26905532 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/1R6Q9zH
via IFTTT

A Distributed Network for Social Cognition Enriched for Oxytocin Receptors.

J Neurosci. 2016 Feb 24;36(8):2517-35

Authors: Mitre M, Marlin BJ, Schiavo JK, Morina E, Norden SE, Hackett TA, Aoki CJ, Chao MV, Froemke RC

Abstract
UNLABELLED: Oxytocin is a neuropeptide important for social behaviors such as maternal care and parent-infant bonding. It is believed that oxytocin receptor signaling in the brain is critical for these behaviors, but it is unknown precisely when and where oxytocin receptors are expressed or which neural circuits are directly sensitive to oxytocin. To overcome this challenge, we generated specific antibodies to the mouse oxytocin receptor and examined receptor expression throughout the brain. We identified a distributed network of female mouse brain regions for maternal behaviors that are especially enriched for oxytocin receptors, including the piriform cortex, the left auditory cortex, and CA2 of the hippocampus. Electron microscopic analysis of the cerebral cortex revealed that oxytocin receptors were mainly expressed at synapses, as well as on axons and glial processes. Functionally, oxytocin transiently reduced synaptic inhibition in multiple brain regions and enabled long-term synaptic plasticity in the auditory cortex. Thus modulation of inhibition may be a general mechanism by which oxytocin can act throughout the brain to regulate parental behaviors and social cognition.
SIGNIFICANCE STATEMENT: Oxytocin is an important peptide hormone involved in maternal behavior and social cognition, but it has been unclear what elements of neural circuits express oxytocin receptors due to the paucity of suitable antibodies. Here, we developed new antibodies to the mouse oxytocin receptor. Oxytocin receptors were found in discrete brain regions and at cortical synapses for modulating excitatory-inhibitory balance and plasticity. These antibodies should be useful for future studies of oxytocin and social behavior.

PMID: 26911697 [PubMed - in process]



from #Audiology via ola Kala on Inoreader http://ift.tt/1Q7keBA
via IFTTT

Higher prevalence of autoimmune diseases and longer spells of vertigo in patients affected with familial Ménière's disease: A clinical comparison of familial and sporadic Ménière's disease.

Am J Audiol. 2014 Jun;23(2):232-7

Authors: Hietikko E, Sorri M, Männikkö M, Kotimäki J

Abstract
PURPOSE: This study compared clinical features, predisposing factors, and concomitant diseases between sporadic and familial Ménière's disease (MD).
METHOD: Retrospective chart review and postal questionnaire were used. Participants were 250 patients with definite MD (sporadic, n = 149; familial, n = 101) who fulfilled the American Academy of Otorhinolaryngology-Head and Neck Surgery (1995) criteria.
RESULTS: On average, familial patients were affected 5.6 years earlier than sporadic patients, and they suffered from significantly longer spells of vertigo (p = .007). The prevalence of rheumatoid arthritis (p = .002) and other autoimmune diseases (p = .046) was higher among the familial patients, who also had more migraine (p = .036) and hearing impairment (p = .002) in their families.
CONCLUSION: The clinical features of familial and sporadic MD are very similar in general, but some differences do exist. Familial MD patients are affected earlier and suffer from longer spells of vertigo.

PMID: 24686733 [PubMed - indexed for MEDLINE]



from #Audiology via xlomafota13 on Inoreader http://ift.tt/1S5zj9o
via IFTTT

Cerebral Processing of Emotionally Loaded Acoustic Signals by Tinnitus Patients

This exploratory study determined the activation pattern in nonauditory brain areas in response to acoustic, emotionally positive, negative or neutral stimuli presented to tinnitus patients and control subjects. Ten patients with chronic tinnitus and without measurable hearing loss and 13 matched control subjects were included in the study and subjected to fMRI with a 1.5-tesla scanner. During the scanning procedure, acoustic stimuli of different emotional value were presented to the subjects. Statistical analyses were performed using statistical parametric mapping (SPM 99). The activation pattern induced by emotionally loaded acoustic stimuli differed significantly within and between both groups tested, depending on the kind of stimuli used. Within-group differences included the limbic system, prefrontal regions, temporal association cortices and striatal regions. Tinnitus patients had a pronounced involvement of limbic regions involved in the processing of chimes (positive stimulus) and neutral words (neutral stimulus), strongly suggesting improperly functioning inhibitory mechanisms that were functioning well in the control subjects. This study supports the hypothesis about the existence of a tinnitus-specific brain network. Such a network could respond to any acoustic stimuli by activating limbic areas involved in stress reactivity and emotional processing and by reducing activation of areas responsible for attention and acoustic filtering (thalamus, frontal regions), possibly reinforcing negative effects of tinnitus.
Audiol Neurotol 2016;21:80-87

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1LhJt5o
via IFTTT

Thursday, 25 February 2016

Music Lovers and Hearing Aids

Some people cannot imagine life without music: it relaxes, inspires, and, for many, completes their lives. Music means something different to each person, and individual preferences vary, but most would agree that listening to music should be enjoyable. This seemingly simple pleasure can easily be undermined by hearing loss, as hearing only a limited part of the dynamics, or a reduced frequency range, of our favorite music can significantly diminish the enjoyment of the experience. Moreover, in some cases this problem cannot be fixed with modern hearing aids.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1LhfUAM
via IFTTT

Efficacy of Multiple-Talker Phonetic Identification Training in Postlingually Deafened Cochlear Implant Listeners

Purpose
This study implemented a pretest-intervention-posttest design to examine whether multiple-talker identification training enhanced phonetic perception of the /ba/-/da/ and /wa/-/ja/ contrasts in adult listeners who were deafened postlingually and have cochlear implants (CIs).
Method
Nine CI recipients completed 8 hours of identification training using a custom-designed training package. Perception of speech produced by familiar talkers (talkers used during training) and unfamiliar talkers (talkers not used during training) was measured before and after training. Five additional untrained CI recipients completed identical pre- and posttests over the same time course as the trainees to control for procedural learning effects.
Results
Perception of the speech contrasts produced by the familiar talkers improved significantly for the trained CI listeners, and the effects of perceptual learning transferred to unfamiliar talkers. No such training-induced changes were observed in the control group.
Conclusion
The data provide initial evidence of the efficacy of the multiple-talker identification training paradigm for CI users who were deafened postlingually. This pattern of results is consistent with enhanced phonemic categorization of the trained speech sounds.

from #Audiology via ola Kala on Inoreader http://ift.tt/1QJjE0v
via IFTTT

English Language Learners' Nonword Repetition Performance: The Influence of Age, L2 Vocabulary Size, Length of L2 Exposure, and L1 Phonology

Purpose
This study examined individual differences in English language learners' (ELLs) nonword repetition (NWR) accuracy, focusing on the effects of age, English vocabulary size, length of exposure to English, and first-language (L1) phonology.
Method
Participants were 75 typically developing ELLs (mean age 5;8 [years;months]) whose exposure to English began on average at age 4;4. The children spoke either a Chinese or a South Asian language as their L1 and were given standardized English tests of NWR and receptive vocabulary.
Results
Although the majority of ELLs scored within or above the monolingual normal range (71%), 29% scored below. Mixed logistic regression modeling revealed that a larger English vocabulary, longer English exposure, South Asian L1, and older age all had significant and positive effects on ELLs' NWR accuracy. Error analyses revealed the following L1 effect: onset consonants were produced more accurately than codas overall, but this effect was stronger for the Chinese group whose L1s have a more limited coda inventory compared with English.
Conclusion
ELLs' NWR performance is influenced by a number of factors. Consideration of these factors is important in deciding whether monolingual norm referencing is appropriate for ELL children.

from #Audiology via ola Kala on Inoreader http://ift.tt/1TvvVpp
via IFTTT

Masking Release in Children and Adults With Hearing Loss When Using Amplification

Purpose
This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared.
Method
Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator.
Results
Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression.
Conclusions
The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.
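Masking release itself is a simple quantity: the difference between the speech-recognition threshold (SRT, in dB SNR) measured in unmodulated noise and the threshold measured in modulated noise, so that a positive value means the listener benefited from the dips. A minimal sketch, with invented SRT values purely for illustration:

```python
def masking_release(srt_unmodulated_db, srt_modulated_db):
    """Masking release in dB: how much lower an SNR the listener tolerates
    when the masker has dips (modulated) than when it is steady.
    Positive values mean the listener benefited from the dips."""
    return srt_unmodulated_db - srt_modulated_db

# Hypothetical per-listener SRTs in dB SNR (lower is better):
# (unmodulated-noise SRT, modulated-noise SRT)
listeners = {
    "adult_hl_1": (-2.0, -4.5),
    "adult_hl_2": (-1.5, -3.0),
    "child_hl_1": (1.0, 0.5),
}

for name, (unmod, mod) in listeners.items():
    print(f"{name}: masking release = {masking_release(unmod, mod)} dB")
```

A group average of such per-listener differences is what the study reports as the (small, ~1 dB) mean masking release.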

from #Audiology via ola Kala on Inoreader http://ift.tt/1WjFyoV
via IFTTT

Sentence Recall by Children With SLI Across Two Nonmainstream Dialects of English

Purpose
The inability to accurately recall sentences has proven to be a clinical marker of specific language impairment (SLI); this task yields moderate-to-high levels of sensitivity and specificity. However, it is not yet known if these results hold for speakers of dialects whose nonmainstream grammatical productions overlap with those that are produced at high rates by children with SLI.
Method
Using matched groups of 70 African American English speakers and 36 Southern White English speakers and dialect-strategic scoring, we examined children's sentence recall abilities as a function of their dialect and clinical status (SLI vs. typically developing [TD]).
Results
For both dialects, the SLI group earned lower sentence recall scores than the TD group with sensitivity and specificity values ranging from .80 to .94, depending on the analysis. Children with SLI, as compared with TD controls, manifested lower levels of verbatim recall, more ungrammatical recalls when the recall was not exact, and higher levels of error on targeted functional categories, especially those marking tense.
Conclusion
When matched groups are examined and dialect-strategic scoring is used, sentence recall yields moderate-to-high levels of diagnostic accuracy to identify SLI within speakers of nonmainstream dialects of English.
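The sensitivity and specificity figures reported above follow from ordinary confusion-matrix arithmetic: sensitivity is the proportion of children with SLI that the task correctly flags, and specificity is the proportion of TD children it correctly passes. A small sketch with hypothetical counts (not the study's data):

```python
def sensitivity(true_pos, false_neg):
    # Proportion of affected children (SLI) correctly identified.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Proportion of typically developing children correctly passed.
    return true_neg / (true_neg + false_pos)

# Hypothetical confusion counts for one sentence-recall cutoff score.
tp, fn = 45, 5    # SLI group: correctly flagged vs missed
tn, fp = 47, 3    # TD group: correctly passed vs wrongly flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")   # 0.94
```

Values of .80 or above on both measures are conventionally treated as acceptable diagnostic accuracy for a clinical marker.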

from #Audiology via ola Kala on Inoreader http://ift.tt/1P0d1jm
via IFTTT

Specific Language Impairment, Nonverbal IQ, Attention-Deficit/Hyperactivity Disorder, Autism Spectrum Disorder, Cochlear Implants, Bilingualism, and Dialectal Variants: Defining the Boundaries, Clarifying Clinical Conditions, and Sorting Out Causes

Purpose
The purpose of this research forum article is to provide an overview of a collection of invited articles on the topic “specific language impairment (SLI) in children with concomitant health conditions or nonmainstream language backgrounds.” Topics include SLI, attention-deficit/hyperactivity disorder, autism spectrum disorder, cochlear implants, bilingualism, and dialectal language learning contexts.
Method
The topic is timely due to current debates about the diagnosis of SLI. An overarching comparative conceptual framework is provided for comparisons of SLI with other clinical conditions. Comparisons of SLI in children with low-normal or normal nonverbal IQ illustrate the unexpected outcomes of 2 × 2 comparison designs.
Results
Comparative studies reveal unexpected relationships among speech, language, cognitive, and social dimensions of children's development as well as precise ways to identify children with SLI who are bilingual or dialect speakers.
Conclusions
The diagnosis of SLI is essential for elucidating possible causal pathways of language impairments, risks for language impairments, assessments for identification of language impairments, linguistic dimensions of language impairments, and long-term outcomes. Although children's language acquisition is robust under high levels of risk, unexplained individual variations in language acquisition lead to persistent language impairments.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QJjFkR
via IFTTT

Language Impairment in the Attention-Deficit/Hyperactivity Disorder Context

Purpose
Attention-deficit/hyperactivity disorder (ADHD) is a ubiquitous designation that affects the identification, assessment, treatment, and study of pediatric language impairments (LIs).
Method
Current literature is reviewed in 4 areas: (a) the capacity of psycholinguistic, neuropsychological, and socioemotional behavioral indices to differentiate cases of LI from ADHD; (b) the impact of co-occurring ADHD on children's LI; (c) cross-etiology comparisons of the nonlinguistic abilities of children with ADHD and specific LI (SLI); and (d) the extent to which ADHD contributes to educational and health disparities among individuals with LI.
Results
Evidence is presented demonstrating the value of using adjusted parent ratings of ADHD symptoms and targeted assessments of children's tense marking, nonword repetition, and sentence recall for differential diagnosis and the identification of comorbidity. Reports suggest that the presence of ADHD does not aggravate children's LI. The potential value of cross-etiology comparisons testing the necessity and sufficiency of proposed nonlinguistic contributors to the etiology of SLI is demonstrated through key studies. Reports suggest that children with comorbid ADHD+LI receive speech-language services at a higher rate than children with SLI.
Conclusion
The ADHD context is multifaceted and provides the management and study of LI with both opportunities and obstacles.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1RrA1eY
via IFTTT

Visual Speech Perception in Children With Language Learning Impairments

Purpose
The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face.
Method
In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker.
Results
Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception.
Conclusion
Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups were able to make equivalent use of visual cues to boost performance accuracy when listening in noise.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1otqHNV
via IFTTT

Risk Factors Associated With Language in Autism Spectrum Disorder: Clues to Underlying Mechanisms

Purpose
Identifying risk factors associated with neurodevelopmental disorders is an important line of research, as it will lead to earlier identification of children who could benefit from interventions that support optimal developmental outcomes. The primary goal of this review was to summarize research on risk factors associated with autism spectrum disorder (ASD).
Method
The review focused on studies of infants who have older siblings with ASD, with particular emphasis on risk factors associated with language impairment that affects the majority of children with ASD. Findings from this body of work were compared to the literature on specific language impairment.
Results
A wide range of risk factors has been found for ASD, including demographic (e.g., male sex, family history), behavioral (e.g., gesture, motor), and neural risk markers (e.g., atypical lateralization for speech and reduced functional connectivity). Environmental factors, such as caregiver interaction, have not been found to predict language outcomes. Many of the risk markers for ASD are also found in studies of risk for specific language impairment, including demographic, behavioral, and neural factors.
Conclusions
There are significant gaps in the literature and limitations in the current research that preclude direct cross-syndrome comparisons. Future research directions are outlined that could address these limitations.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1RrA1eS
via IFTTT

Racial Variations in Velopharyngeal and Craniometric Morphology in Children: An Imaging Study

Purpose
The purpose of this study is to examine craniometric and velopharyngeal anatomy among young children (4–8 years of age) with normal anatomy across Black and White racial groups.
Method
Thirty-two healthy children (16 White and 16 Black) with normal velopharyngeal anatomy participated and successfully completed the magnetic resonance imaging scans. Measurements included 11 craniofacial and 9 velopharyngeal measures.
Results
Two-way analysis of covariance was used to determine the effects of race and sex on velopharyngeal measures and all craniometric measures except head circumference. Head circumference was included as a covariate to control for overall cranial size. Sex did not have a significant effect on any of the craniometric measures. Significant racial differences were demonstrated for face height. A significant race effect was also observed for mean velar length, velar thickness, and velopharyngeal ratio.
Conclusion
The present study provides separate craniofacial and velopharyngeal values for young Black and White children. Data from this study can be used to examine morphological variations with respect to race and sex.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/24axz3l
via IFTTT

An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification

Purpose
The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements.
Method
The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases. They then used a machine-learning classifier (a support-vector machine) to classify the speech stimuli on the basis of the articulatory movements and compared the classification accuracies of the flesh-point combinations to determine an optimal set of sensors.
Results
When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (which also included the two intermediate tongue-body sensors, T2 and T3).
Conclusion
We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements.
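The sensor-selection comparison can be mimicked on synthetic data: build one feature vector per spoken token by concatenating per-sensor movement features, then train a classifier on different sensor subsets and compare accuracies. The sketch below is hypothetical throughout — Gaussian synthetic features and a simple nearest-centroid classifier standing in for the paper's support-vector machine:

```python
import numpy as np

rng = np.random.default_rng(0)
SENSORS = ["T1", "T2", "T3", "T4", "UL", "LL"]
FEATS_PER_SENSOR = 3            # e.g., movement range in x, y, z (assumed)
N_CLASSES, N_PER_CLASS = 8, 40  # e.g., 8 vowels, 40 tokens each (assumed)

# Synthetic data: each phoneme class has its own mean articulatory pattern,
# and tokens scatter around that mean.
means = rng.normal(0, 1, (N_CLASSES, len(SENSORS) * FEATS_PER_SENSOR))
X = np.vstack([m + rng.normal(0, 0.6, (N_PER_CLASS, means.shape[1]))
               for m in means])
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

def columns(subset):
    # Feature-column indices belonging to the chosen sensors.
    idx = [SENSORS.index(s) for s in subset]
    return np.concatenate([np.arange(i * FEATS_PER_SENSOR,
                                     (i + 1) * FEATS_PER_SENSOR) for i in idx])

def nearest_centroid_accuracy(subset):
    Xs = X[:, columns(subset)]
    # Train on even-indexed tokens, test on odd-indexed tokens.
    train, test = Xs[::2], Xs[1::2]
    ytr, yte = y[::2], y[1::2]
    centroids = np.array([train[ytr == c].mean(axis=0)
                          for c in range(N_CLASSES)])
    pred = np.argmin(((test[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return (pred == yte).mean()

print("all 6 sensors :", nearest_centroid_accuracy(SENSORS))
print("T1+T4+UL+LL   :", nearest_centroid_accuracy(["T1", "T4", "UL", "LL"]))
```

On data like this, dropping redundant sensors costs little accuracy, which is the qualitative pattern the study reports for the T1, T4, UL, LL subset.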

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1L1Y92X
via IFTTT

Pragmatic Language Features of Mothers With the FMR1 Premutation Are Associated With the Language Outcomes of Adolescents and Young Adults With Fragile X Syndrome

Purpose
Pragmatic language difficulties have been documented as part of the FMR1 premutation phenotype, yet the interplay between these features in mothers and the language outcomes of their children with fragile X syndrome is unknown. This study aimed to determine whether pragmatic language difficulties in mothers with the FMR1 premutation are related to the language development of their children.
Method
Twenty-seven mothers with the FMR1 premutation and their adolescent/young adult sons with fragile X syndrome participated. Maternal pragmatic language violations were rated from conversational samples using the Pragmatic Rating Scale (Landa et al., 1992). Children completed standardized assessments of vocabulary, syntax, and reading.
Results
Maternal pragmatic language difficulties were significantly associated with poorer child receptive vocabulary and expressive syntax skills, with medium effect sizes.
Conclusions
This work contributes to knowledge of the FMR1 premutation phenotype and its consequences at the family level, with the goal of identifying modifiable aspects of the child's language-learning environment that may promote the selection of treatments targeting the specific needs of families affected by fragile X. Findings contribute to our understanding of the multifaceted environment in which children with fragile X syndrome learn language and highlight the importance of family-centered intervention practices for this group.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1PLEmsS
via IFTTT

Persistent Language Delay Versus Late Language Emergence in Children With Early Cochlear Implantation

Purpose
The purpose of the present investigation is to differentiate children using cochlear implants (CIs) who did or did not achieve age-appropriate language scores by midelementary grades and to identify risk factors for persistent language delay following early cochlear implantation.
Materials and Method
Children receiving unilateral CIs at young ages (12–38 months) were tested longitudinally and classified with normal language emergence (n = 19), late language emergence (n = 22), or persistent language delay (n = 19) on the basis of their test scores at 4.5 and 10.5 years of age. Relative effects of demographic, audiological, linguistic, and academic characteristics on language emergence were determined.
Results
Age at CI was associated with normal language emergence but did not differentiate late emergence from persistent delay. Children with persistent delay were more likely to use left-ear implants and older speech processor technology. They experienced higher aided thresholds and lower speech perception scores. Persistent delay was foreshadowed by low morphosyntactic and phonological diversity in preschool. Logistic regression analysis predicted normal language emergence with 84% accuracy and persistent language delay with 74% accuracy.
Conclusion
CI characteristics had a strong effect on persistent versus resolving language delay, suggesting that right-ear (or bilateral) devices, technology upgrades, and improved audibility may positively influence long-term language outcomes.
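Logistic regression of the kind used to predict language-emergence group can be sketched in a few lines. Everything below is synthetic and illustrative — the predictors (age at implantation, aided threshold, preschool morphosyntactic diversity) are chosen to echo the abstract, but the data, effect sizes, and fitted accuracy are not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predictors (placeholders, not the study's data):
n = 200
age_ci = rng.uniform(12, 38, n)   # age at implantation, months
aided = rng.uniform(20, 60, n)    # aided threshold, dB HL
morph = rng.normal(0, 1, n)       # morphosyntactic diversity, z score

# Simulate the binary outcome: persistent delay is more likely with later
# implantation, higher aided thresholds, and lower preschool diversity.
true_logit = -4 + 0.08 * age_ci + 0.06 * aided - 1.2 * morph
delay = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Standardize predictors and add an intercept column.
F = np.column_stack([age_ci, aided, morph])
F = (F - F.mean(0)) / F.std(0)
X = np.column_stack([np.ones(n), F])

# Fit logistic regression by gradient descent on the mean negative
# log-likelihood (the gradient is X^T (p - y) / n).
w = np.zeros(X.shape[1])
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - delay) / n

accuracy = (((1 / (1 + np.exp(-X @ w))) > 0.5) == (delay > 0.5)).mean()
print("coefficients:", np.round(w, 2))
print("training accuracy:", accuracy)
```

Classification accuracy per outcome group (the 84% and 74% figures above) would then come from tabulating predictions separately within each class.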

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QJjDJN
via IFTTT

The Development of English as a Second Language With and Without Specific Language Impairment: Clinical Implications

Purpose
The purpose of this research forum article is to provide an overview of typical and atypical development of English as a second language (L2) and to present strategies for clinical assessment with English language learners (ELLs).
Method
A review of studies examining the lexical, morphological, narrative, and verbal memory abilities of ELLs is organized around 3 topics: timeframe and characteristics of typical English L2 development, comparison of the English L2 development of children with and without specific language impairment (SLI), and strategies for more effective assessment with ELLs.
Results
ELLs take longer than 3 years to converge on monolingual norms, and they approach those norms asynchronously across linguistic subdomains. Individual variation is predicted by age, first language, language learning aptitude, length of exposure to English in school, maternal education, and richness of the English environment outside school. ELLs with SLI acquire English more slowly than ELLs with typical development; their morphological and nonword repetition abilities differentiate them the most. Use of strategies such as parent questionnaires on first language development and ELL norm referencing can result in accurate discrimination of ELLs with SLI.
Conclusions
Variability in the language abilities of ELLs presents challenges for clinical practice. Increased knowledge of English language learning development with and without SLI together with evidence-based alternative assessment strategies can assist in overcoming these challenges.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1QYYESH
via IFTTT

Effects of Removing Low-Frequency Electric Information on Speech Perception With Bimodal Hearing

Purpose
The objective was to determine whether speech perception could be improved for bimodal listeners (those using a cochlear implant [CI] in one ear and hearing aid in the contralateral ear) by removing low-frequency information provided by the CI, thereby reducing acoustic–electric overlap.
Method
Subjects were adults with at least 1 year of CI experience. Nine subjects were evaluated in the CI-only condition (control condition), and 26 subjects were evaluated in the bimodal condition. CIs were programmed with 4 experimental programs in which the low cutoff frequency (LCF) was progressively raised. Speech perception was evaluated using Consonant-Nucleus-Consonant words in quiet, AzBio sentences in background babble, and spondee words in background babble.
Results
The CI-only group showed decreased speech perception in both quiet and noise as the LCF was raised. Bimodal subjects with better hearing in the hearing aid ear (≤ 60 dB HL at 250 and 500 Hz) performed similarly to the CI-only group.
Conclusions
These findings suggest that reducing low-frequency overlap of the CI and contralateral hearing aid may improve performance in quiet for some bimodal listeners with better hearing.

from #Audiology via xlomafota13 on Inoreader http://ift.tt/1RzzERw
via IFTTT

Efficacy of Multiple-Talker Phonetic Identification Training in Postlingually Deafened Cochlear Implant Listeners

Purpose
This study implemented a pretest-intervention-posttest design to examine whether multiple-talker identification training enhanced phonetic perception of the /ba/-/da/ and /wa/-/ja/ contrasts in adult listeners who were deafened postlingually and have cochlear implants (CIs).
Method
Nine CI recipients completed 8 hours of identification training using a custom-designed training package. Perception of speech produced by familiar talkers (talkers used during training) and unfamiliar talkers (talkers not used during training) was measured before and after training. Five additional untrained CI recipients completed identical pre- and posttests over the same time course as the trainees to control for procedural learning effects.
Results
Perception of the speech contrasts produced by the familiar talkers significantly improved for the trained CI listeners, and effects of perceptual learning transferred to unfamiliar talkers. Such training-induced significant changes were not observed in the control group.
Conclusion
The data provide initial evidence of the efficacy of the multiple-talker identification training paradigm for CI users who were deafened postlingually. This pattern of results is consistent with enhanced phonemic categorization of the trained speech sounds.

from #Audiology via ola Kala on Inoreader http://ift.tt/1QJjE0v
via IFTTT

English Language Learners' Nonword Repetition Performance: The Influence of Age, L2 Vocabulary Size, Length of L2 Exposure, and L1 Phonology

Purpose
This study examined individual differences in English language learners' (ELLs) nonword repetition (NWR) accuracy, focusing on the effects of age, English vocabulary size, length of exposure to English, and first-language (L1) phonology.
Method
Participants were 75 typically developing ELLs (mean age 5;8 [years;months]) whose exposure to English began on average at age 4;4. Children spoke either a Chinese language or South Asian language as an L1 and were given English standardized tests for NWR and receptive vocabulary.
Results
Although the majority of ELLs scored within or above the monolingual normal range (71%), 29% scored below. Mixed logistic regression modeling revealed that a larger English vocabulary, longer English exposure, South Asian L1, and older age all had significant and positive effects on ELLs' NWR accuracy. Error analyses revealed the following L1 effect: onset consonants were produced more accurately than codas overall, but this effect was stronger for the Chinese group whose L1s have a more limited coda inventory compared with English.
Conclusion
ELLs' NWR performance is influenced by a number of factors. Consideration of these factors is important in deciding whether monolingual norm referencing is appropriate for ELL children.
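The mixed logistic regression in the Results models each item-level NWR response as a binary outcome through a logistic link, with predictors such as vocabulary size and length of exposure contributing on the log-odds scale. A minimal sketch of that link (the intercept and coefficient values below are hypothetical illustrations, not the study's estimates):

```python
import math

def p_correct(log_odds):
    """Logistic (inverse-logit) link: convert summed predictor
    effects in log-odds into a probability of a correct repetition."""
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical model: neutral intercept plus positive effects of
# English vocabulary size and length of exposure (standardized units).
intercept, b_vocab, b_exposure = 0.0, 0.8, 0.5
print(p_correct(intercept + b_vocab * 1.0 + b_exposure * 1.0))  # ≈ 0.79
```

A child one standard deviation above the mean on both predictors would, under these made-up coefficients, have roughly a 79% chance of repeating an item correctly; the study's mixed model additionally includes random effects per child and item.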

from #Audiology via ola Kala on Inoreader http://ift.tt/1TvvVpp
via IFTTT

Masking Release in Children and Adults With Hearing Loss When Using Amplification

Purpose
This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared.
Method
Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator.
Results
Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression.
Conclusions
The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.

from #Audiology via ola Kala on Inoreader http://ift.tt/1WjFyoV
via IFTTT

Sentence Recall by Children With SLI Across Two Nonmainstream Dialects of English

Purpose
The inability to accurately recall sentences has proven to be a clinical marker of specific language impairment (SLI); this task yields moderate-to-high levels of sensitivity and specificity. However, it is not yet known if these results hold for speakers of dialects whose nonmainstream grammatical productions overlap with those that are produced at high rates by children with SLI.
Method
Using matched groups of 70 African American English speakers and 36 Southern White English speakers and dialect-strategic scoring, we examined children's sentence recall abilities as a function of their dialect and clinical status (SLI vs. typically developing [TD]).
Results
For both dialects, the SLI group earned lower sentence recall scores than the TD group with sensitivity and specificity values ranging from .80 to .94, depending on the analysis. Children with SLI, as compared with TD controls, manifested lower levels of verbatim recall, more ungrammatical recalls when the recall was not exact, and higher levels of error on targeted functional categories, especially those marking tense.
Conclusion
When matched groups are examined and dialect-strategic scoring is used, sentence recall yields moderate-to-high levels of diagnostic accuracy to identify SLI within speakers of nonmainstream dialects of English.
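The sensitivity and specificity values of .80 to .94 cited above are simple proportions over the classification counts: sensitivity is the share of children with SLI that the sentence-recall cutoff flags, and specificity the share of TD children it correctly clears. A minimal sketch (the counts are invented for illustration, not the study's data):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of affected (SLI) children correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of typically developing children correctly cleared."""
    return true_neg / (true_neg + false_pos)

# Invented counts: of 50 children with SLI, 45 score below the cutoff;
# of 50 TD children, 40 score above it.
print(sensitivity(45, 5), specificity(40, 10))  # 0.9 0.8
```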

from #Audiology via ola Kala on Inoreader http://ift.tt/1P0d1jm
via IFTTT

Specific Language Impairment, Nonverbal IQ, Attention-Deficit/Hyperactivity Disorder, Autism Spectrum Disorder, Cochlear Implants, Bilingualism, and Dialectal Variants: Defining the Boundaries, Clarifying Clinical Conditions, and Sorting Out Causes

Purpose
The purpose of this research forum article is to provide an overview of a collection of invited articles on the topic “specific language impairment (SLI) in children with concomitant health conditions or nonmainstream language backgrounds.” Topics include SLI, attention-deficit/hyperactivity disorder, autism spectrum disorder, cochlear implants, bilingualism, and dialectal language learning contexts.
Method
The topic is timely due to current debates about the diagnosis of SLI. An overarching comparative conceptual framework is provided for comparisons of SLI with other clinical conditions. Comparisons of SLI in children with low-normal or normal nonverbal IQ illustrate the unexpected outcomes of 2 × 2 comparison designs.
Results
Comparative studies reveal unexpected relationships among speech, language, cognitive, and social dimensions of children's development as well as precise ways to identify children with SLI who are bilingual or dialect speakers.
Conclusions
The diagnosis of SLI is essential for elucidating possible causal pathways of language impairments, risks for language impairments, assessments for identification of language impairments, linguistic dimensions of language impairments, and long-term outcomes. Although children's language acquisition is robust under high levels of risk, unexplained individual variations in language acquisition lead to persistent language impairments.

from #Audiology via ola Kala on Inoreader http://ift.tt/1QJjFkR
via IFTTT

Language Impairment in the Attention-Deficit/Hyperactivity Disorder Context

Purpose
Attention-deficit/hyperactivity disorder (ADHD) is a ubiquitous designation that affects the identification, assessment, treatment, and study of pediatric language impairments (LIs).
Method
Current literature is reviewed in 4 areas: (a) the capacity of psycholinguistic, neuropsychological, and socioemotional behavioral indices to differentiate cases of LI from ADHD; (b) the impact of co-occurring ADHD on children's LI; (c) cross-etiology comparisons of the nonlinguistic abilities of children with ADHD and specific LI (SLI); and (d) the extent to which ADHD contributes to educational and health disparities among individuals with LI.
Results
Evidence is presented demonstrating the value of using adjusted parent ratings of ADHD symptoms and targeted assessments of children's tense marking, nonword repetition, and sentence recall for differential diagnosis and the identification of comorbidity. Reports suggest that the presence of ADHD does not aggravate children's LI. The potential value of cross-etiology comparisons testing the necessity and sufficiency of proposed nonlinguistic contributors to the etiology of SLI is demonstrated through key studies. Reports suggest that children with comorbid ADHD+LI receive speech-language services at a higher rate than children with SLI.
Conclusion
The ADHD context is multifaceted and provides the management and study of LI with both opportunities and obstacles.

from #Audiology via ola Kala on Inoreader http://ift.tt/1RrA1eY
via IFTTT

Visual Speech Perception in Children With Language Learning Impairments

Purpose
The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face.
Method
In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker.
Results
Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception.
Conclusion
Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups made equivalent use of visual cues to boost performance accuracy when listening in noise.

from #Audiology via ola Kala on Inoreader http://ift.tt/1otqHNV
via IFTTT

Risk Factors Associated With Language in Autism Spectrum Disorder: Clues to Underlying Mechanisms

Purpose
Identifying risk factors associated with neurodevelopmental disorders is an important line of research, as it will lead to earlier identification of children who could benefit from interventions that support optimal developmental outcomes. The primary goal of this review was to summarize research on risk factors associated with autism spectrum disorder (ASD).
Method
The review focused on studies of infants who have older siblings with ASD, with particular emphasis on risk factors associated with language impairment that affects the majority of children with ASD. Findings from this body of work were compared to the literature on specific language impairment.
Results
A wide range of risk factors has been found for ASD, including demographic (e.g., male sex, family history), behavioral (e.g., gesture, motor), and neural risk markers (e.g., atypical lateralization for speech and reduced functional connectivity). Environmental factors, such as caregiver interaction, have not been found to predict language outcomes. Many of the risk markers for ASD are also found in studies of risk for specific language impairment, including demographic, behavioral, and neural factors.
Conclusions
There are significant gaps in the literature and limitations in the current research that preclude direct cross-syndrome comparisons. Future research directions are outlined that could address these limitations.

from #Audiology via ola Kala on Inoreader http://ift.tt/1RrA1eS
via IFTTT

Racial Variations in Velopharyngeal and Craniometric Morphology in Children: An Imaging Study

Purpose
The purpose of this study is to examine craniometric and velopharyngeal anatomy among young children (4–8 years of age) with normal anatomy across Black and White racial groups.
Method
Thirty-two healthy children (16 White and 16 Black) with normal velopharyngeal anatomy participated and successfully completed the magnetic resonance imaging scans. Measurements included 11 craniofacial and 9 velopharyngeal measures.
Results
Two-way analysis of covariance was used to determine the effects of race and sex on velopharyngeal measures and all craniometric measures except head circumference. Head circumference was included as a covariate to control for overall cranial size. Sex did not have a significant effect on any of the craniometric measures. Significant racial differences were demonstrated for face height. A significant race effect was also observed for mean velar length, velar thickness, and velopharyngeal ratio.
Conclusion
The present study provides separate craniofacial and velopharyngeal values for young Black and White children. Data from this study can be used to examine morphological variations with respect to race and sex.

from #Audiology via ola Kala on Inoreader http://ift.tt/24axz3l
via IFTTT

An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification

Purpose
The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements.
Method
We used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers as they articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases. We used a machine-learning classifier (a support-vector machine) to classify the speech stimuli on the basis of articulatory movements and then compared the classification accuracies of the flesh-point combinations to determine an optimal set of sensors.
Results
When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with those from the full set, which additionally included the 2 mid-tongue sensors (T2 and T3).
Conclusion
We identified a 4-sensor set—that is, T1, T4, UL, LL—that yielded a classification accuracy (91%–95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements.
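The classification pipeline sketched in the Method, flattened sensor trajectories fed to a support-vector machine, can be illustrated on synthetic data. The feature layout and class separation below are invented stand-ins, not the study's recordings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for flattened articulatory trajectories: 12 features per
# token (e.g., 4 sensors x 3 spatial coordinates), and two phoneme
# classes with well-separated movement patterns.
X_class0 = rng.normal(loc=0.0, scale=0.1, size=(10, 12))
X_class1 = rng.normal(loc=1.0, scale=0.1, size=(10, 12))
X = np.vstack([X_class0, X_class1])
y = np.array([0] * 10 + [1] * 10)

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.0] * 12, [1.0] * 12]))  # [0 1]
```

In the study itself, comparing such classifiers trained on different sensor subsets is what identified {T1, T4, UL, LL} as the optimal set.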

from #Audiology via ola Kala on Inoreader http://ift.tt/1L1Y92X
via IFTTT

Pragmatic Language Features of Mothers With the FMR1 Premutation Are Associated With the Language Outcomes of Adolescents and Young Adults With Fragile X Syndrome

Purpose
Pragmatic language difficulties have been documented as part of the FMR1 premutation phenotype, yet the interplay between these features in mothers and the language outcomes of their children with fragile X syndrome is unknown. This study aimed to determine whether pragmatic language difficulties in mothers with the FMR1 premutation are related to the language development of their children.
Method
Twenty-seven mothers with the FMR1 premutation and their adolescent/young adult sons with fragile X syndrome participated. Maternal pragmatic language violations were rated from conversational samples using the Pragmatic Rating Scale (Landa et al., 1992). Children completed standardized assessments of vocabulary, syntax, and reading.
Results
Maternal pragmatic language difficulties were significantly associated with poorer child receptive vocabulary and expressive syntax skills, with medium effect sizes.
Conclusions
This work contributes to knowledge of the FMR1 premutation phenotype and its consequences at the family level, with the goal of identifying modifiable aspects of the child's language-learning environment that may promote the selection of treatments targeting the specific needs of families affected by fragile X. Findings contribute to our understanding of the multifaceted environment in which children with fragile X syndrome learn language and highlight the importance of family-centered intervention practices for this group.

from #Audiology via ola Kala on Inoreader http://ift.tt/1PLEmsS
via IFTTT

Persistent Language Delay Versus Late Language Emergence in Children With Early Cochlear Implantation

Purpose
The purpose of the present investigation is to differentiate children using cochlear implants (CIs) who did or did not achieve age-appropriate language scores by midelementary grades and to identify risk factors for persistent language delay following early cochlear implantation.
Materials and Method
Children receiving unilateral CIs at young ages (12–38 months) were tested longitudinally and classified as showing normal language emergence (n = 19), late language emergence (n = 22), or persistent language delay (n = 19) on the basis of their test scores at 4.5 and 10.5 years of age. The relative effects of demographic, audiological, linguistic, and academic characteristics on language emergence were determined.
Results
Age at CI was associated with normal language emergence but did not differentiate late emergence from persistent delay. Children with persistent delay were more likely to use left-ear implants and older speech processor technology. They experienced higher aided thresholds and lower speech perception scores. Persistent delay was foreshadowed by low morphosyntactic and phonological diversity in preschool. Logistic regression analysis predicted normal language emergence with 84% accuracy and persistent language delay with 74% accuracy.
Conclusion
CI characteristics had a strong effect on persistent versus resolving language delay, suggesting that right-ear (or bilateral) devices, technology upgrades, and improved audibility may positively influence long-term language outcomes.

from #Audiology via ola Kala on Inoreader http://ift.tt/1QJjDJN
via IFTTT

The Development of English as a Second Language With and Without Specific Language Impairment: Clinical Implications

Purpose
The purpose of this research forum article is to provide an overview of typical and atypical development of English as a second language (L2) and to present strategies for clinical assessment with English language learners (ELLs).
Method
A review of studies examining the lexical, morphological, narrative, and verbal memory abilities of ELLs is organized around 3 topics: timeframe and characteristics of typical English L2 development, comparison of the English L2 development of children with and without specific language impairment (SLI), and strategies for more effective assessment with ELLs.
Results
ELLs take longer than 3 years to converge on monolingual norms and approach monolingual norms asynchronously across linguistic subdomains. Individual variation is predicted by age, first language, language learning aptitude, length of exposure to English in school, maternal education, and richness of the English environment outside school. ELLs with SLI acquire English more slowly than ELLs with typical development; their morphological and nonword repetition abilities differentiate them the most. Use of strategies such as parent questionnaires on first language development and ELL norm referencing can result in accurate discrimination of ELLs with SLI.
Conclusions
Variability in the language abilities of ELLs presents challenges for clinical practice. Increased knowledge of English language learning development with and without SLI together with evidence-based alternative assessment strategies can assist in overcoming these challenges.

from #Audiology via ola Kala on Inoreader http://ift.tt/1QYYESH
via IFTTT

Effects of Removing Low-Frequency Electric Information on Speech Perception With Bimodal Hearing

Purpose
The objective was to determine whether speech perception could be improved for bimodal listeners (those using a cochlear implant [CI] in one ear and hearing aid in the contralateral ear) by removing low-frequency information provided by the CI, thereby reducing acoustic–electric overlap.
Method
Subjects were adult CI users with at least 1 year of CI experience. Nine subjects were evaluated in the CI-only condition (control condition), and 26 subjects were evaluated in the bimodal condition. CIs were programmed with 4 experimental programs in which the low cutoff frequency (LCF) was progressively raised. Speech perception was evaluated using Consonant-Nucleus-Consonant words in quiet, AzBio sentences in background babble, and spondee words in background babble.
Results
The CI-only group showed decreased speech perception in both quiet and noise as the LCF was raised. Bimodal subjects with better hearing in the hearing aid ear (thresholds ≤ 60 dB HL at 250 and 500 Hz) performed similarly to the CI-only group.
Conclusions
These findings suggest that reducing low-frequency overlap of the CI and contralateral hearing aid may improve performance in quiet for some bimodal listeners with better hearing.
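Raising the LCF amounts to deactivating the CI analysis bands below it, so low frequencies are carried only by the acoustic hearing in the contralateral ear. A schematic sketch of that band removal (the filterbank edges are hypothetical, not any manufacturer's frequency map):

```python
# Hypothetical CI filterbank: (low, high) band edges in Hz per channel.
CHANNELS = [(188, 313), (313, 438), (438, 563), (563, 688), (688, 938)]

def active_channels(bands, low_cutoff_hz):
    """Keep only bands whose lower edge is at or above the LCF,
    mimicking an experimental program that removes low-frequency
    electric information to reduce acoustic-electric overlap."""
    return [band for band in bands if band[0] >= low_cutoff_hz]

print(active_channels(CHANNELS, 438))  # [(438, 563), (563, 688), (688, 938)]
```

Each experimental program in the study corresponds to calling this kind of selection with a progressively higher cutoff.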

from #Audiology via ola Kala on Inoreader http://ift.tt/1RzzERw
via IFTTT
